From: "Bodireddy, Bhanuprakash"
To: Rajalakshmi Prabhakar, dev@dpdk.org, users@dpdk.org
Date: Wed, 28 Dec 2016 13:16:05 +0000
Message-ID: <7EE4206A5F421D4FBA0A4623185DE2BD0DE20C99@IRSMSX104.ger.corp.intel.com>
Subject: Re: [dpdk-users] DPDK: Inter VM communication of iperf3 TCP throughput is very low on same host compare to non DPDK throughput

>-----Original Message-----
>From: Rajalakshmi Prabhakar [mailto:krajalakshmi@tataelxsi.co.in]
>Sent: Tuesday, December 27, 2016 9:52 AM
>To: dev@dpdk.org; users@dpdk.org
>Cc: Bodireddy, Bhanuprakash
>Subject: DPDK: Inter VM communication of iperf3 TCP throughput is very low
>on same host compare to non DPDK throughput
>
>Hello,
>Kindly support me in getting high throughput for inter-VM communication with iperf3
>TCP on an OpenStack DPDK host. I am not sure that I am mailing to the right ID;
>sorry for the inconvenience.

The OVS mailing list would be the appropriate one for the problem you reported here.
Use ovs-discuss@openvswitch.org (or) dev@openvswitch.org.

>
>Host - ubuntu16.04
>devstack - stable/newton
>which installs DPDK 16.07 and OVS 2.6
>with the DPDK plugin and the following DPDK configuration.
>
>Grub changes:
>GRUB_CMDLINE_LINUX_DEFAULT="quiet splash default_hugepagesz=1G
>hugepagesz=1G hugepages=8 iommu=pt intel_iommu=on"
>
>local.conf - changes for DPDK:
>enable_plugin networking-ovs-dpdk
>https://git.openstack.org/openstack/networking-ovs-dpdk master
>OVS_DPDK_MODE=controller_ovs_dpdk
>OVS_NUM_HUGEPAGES=8
>OVS_CORE_MASK=2
>OVS_PMD_CORE_MASK=4

Only one PMD core is used in your case; scaling the PMD threads can be one
option for higher throughput.

>OVS_DPDK_BIND_PORT=False
>OVS_SOCKET_MEM=2048
>OVS_DPDK_VHOST_USER_DEBUG=n
>OVS_ALLOCATE_HUGEPAGES=True
>OVS_HUGEPAGE_MOUNT_PAGESIZE=1G
>MULTI_HOST=1
>OVS_DATAPATH_TYPE=netdev
>
>Before VM creation:
>#nova flavor-key m1.small set hw:mem_page_size=1048576
>Able to create two ubuntu instances with flavor m1.small.

How many cores are assigned to the VM, and have you tried CPU pinning options
instead of allowing the threads to float across the cores?
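For example, something along these lines may help (a rough sketch only; the 0xC
mask and the flavor keys below are illustrative and have to match your own core
layout and Nova scheduler configuration):

$ # spread the OVS PMD threads over cores 2 and 3 (mask 0xC) instead of core 2 only
$ sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC
$ # pin the instance vCPUs to dedicated host cores
$ nova flavor-key m1.small set hw:cpu_policy=dedicated hw:cpu_thread_policy=prefer

Note that the flavor changes only apply to instances created (or resized) after
the change.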
>Achieved iperf3 TCP throughput of ~7.5Gbps.

Are you seeing high drops at the vHost ports and retransmissions? Do you see
the same throughput difference with UDP traffic?

That said, I can't explain the throughput gap you are observing right now.
A couple of things worth checking:
- Thread starvation (htop to see thread activity on the cores).
- I see that you have a single-socket setup and no QPI involved. As you have
  HT enabled, check that appropriate thread siblings are used.
- Check the PMD thread/port statistics for anomalies (a few example commands
  are sketched at the end of this mail).

BTW, the responses can be slow at this point due to the year-end vacation.

Regards,
Bhanuprakash.

>Ensured the vhost port is created and hugepages are consumed: after the
>2 VMs are created, 2GB each, i.e. 4GB for the VMs and 2GB for socket memory, 6GB in total.
>$ sudo cat /proc/meminfo | grep Huge
>AnonHugePages: 0 kB
>HugePages_Total: 8
>HugePages_Free: 2
>HugePages_Rsvd: 0
>HugePages_Surp: 0
>Hugepagesize: 1048576 kB
>
>The same scenario was carried out for the non-DPDK OpenStack case and achieved
>a higher throughput of ~19Gbps, which contradicts the expected results.
>Kindly suggest what additional DPDK configuration should be done for high
>throughput. Also tried CPU pinning and multiqueue for OpenStack DPDK, but
>with no improvement in the result.
>The test PC is single NUMA only. I am not doing NIC binding as I am only trying to
>validate inter-VM communication on the same host. PFB my PC configuration.
>
>$ lscpu
>Architecture: x86_64
>CPU op-mode(s): 32-bit, 64-bit
>Byte Order: Little Endian
>CPU(s): 12
>On-line CPU(s) list: 0-11
>Thread(s) per core: 2
>Core(s) per socket: 6
>Socket(s): 1
>NUMA node(s): 1
>Vendor ID: GenuineIntel
>CPU family: 6
>Model: 63
>Model name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
>Stepping: 2
>CPU MHz: 1212.000
>CPU max MHz: 2400.0000
>CPU min MHz: 1200.0000
>BogoMIPS: 4794.08
>Virtualization: VT-x
>L1d cache: 32K
>L1i cache: 32K
>L2 cache: 256K
>L3 cache: 15360K
>NUMA node0 CPU(s): 0-11
>Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
>pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
>rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
>nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx
>smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
>movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm
>epb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2
>smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat
>pln pts
>
>I am following INSTALL.DPDK.ADVANCED.md but have no clue about the low throughput.
>
>
>Best Regards
>Rajalakshmi Prabhakar
>
>Specialist - Communication BU | Wireless Division
>TATA ELXSI
>IITM Research Park, Kanagam Road, Taramani, Chennai 600 113, India
>Tel +91 44 66775031   Cell +91 9789832957
>www.tataelxsi.com
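A few commands that may help with the statistics checks mentioned above (a rough
sketch; "vhuxxxx" is only a placeholder for your actual vhost-user port name as
listed by ovs-vsctl show):

$ # per-PMD statistics - look for a single core handling all the traffic
$ sudo ovs-appctl dpif-netdev/pmd-stats-show
$ # drop counters on the vhost-user port
$ sudo ovs-vsctl get Interface vhuxxxx statistics
$ # hyper-thread siblings, to avoid placing a PMD thread and a busy vCPU on the same physical core
$ cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list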