From: "Gray, Mark D"
To: Jun Xiao
Cc: dev, discuss
Subject: Re: [dpdk-dev] [ovs-discuss] vswitches performance comparison
Date: Wed, 22 Jul 2015 08:06:36 +0000
Message-ID: <738D45BC1F695740A983F43CFE1B7EA92E2BF8CC@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <9D0E6ED2-6171-4EF5-AD21-01B1844B5136@cloudnetengine.com>

> >> I'd like to hope that's a problem with my methodology, but I just
> >> followed the installation guide without any customization.
> >>
> >> Hi Mark, do you have any performance data to share with us? Maybe we
> >> are using different types of workloads; as I mentioned, I am using a
> >> typical data center workload, while I guess you are talking about an
> >> NFV type of workload?
> >
> > The number being floated around on the mailing list recently is
> > 16.5 Mpps for phy-phy. However, I don't think we have any iperf data
> > off-hand for your use case. When we test throughput into the VM, we
> > usually generate the traffic externally and send it
> > NIC->OVS->VM->OVS->NIC. This is a little different to your setup.
>
> I guess the PMD driver is used inside the VM in that case, right?

Yes, but even when we use virtio-net we see the same, if not *slightly*
better, performance. (A rough sketch of that NIC->OVS->VM->OVS->NIC setup
is included below.)

> > I do know, however, that ovs-dpdk typically has a much higher
> > throughput than the kernel space datapath.
>
> I'd say it depends on the workload: for small/medium packet sizes that
> is definitely true, while for TSO-sized workloads the gain is not that
> obvious (or is worse), as the datapath overheads are amortized and the
> hardware can be leveraged.

For large packets the switch will eventually saturate the NIC at line
rate, but the total aggregate throughput of the switch should still be
higher (you could, for example, add more interfaces to take advantage of
that).

TSO is missing from the DPDK ports at the moment, but it is something we
plan to look at. We are currently enabling jumbo frames (which don't work
at the moment).
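For reference, here is a minimal sketch of how such a NIC->OVS->VM->OVS->NIC
test can be wired up with OVS-DPDK, assuming an OVS build with DPDK and
vhost-user support; the bridge/port names (br0, dpdk0/dpdk1,
vhost-user0/vhost-user1) and the OpenFlow port numbers are illustrative,
not taken from this thread:

  # Use the userspace (netdev) datapath so the DPDK fast path is taken
  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

  # The two physical DPDK-bound NICs on either side of the VM
  ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
  ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk

  # vhost-user ports that QEMU attaches to the guest
  ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
  ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser

  # Pin the NIC->VM->NIC path with explicit flows (assuming OpenFlow port
  # numbers 1/2 for the NICs and 3/4 for the vhost-user ports)
  ovs-ofctl add-flow br0 in_port=1,actions=output:3
  ovs-ofctl add-flow br0 in_port=4,actions=output:2

Inside the guest, a DPDK PMD application (e.g. testpmd forwarding between
the two virtio devices) or a plain virtio-net interface can then terminate
or loop the traffic, which gives the PMD vs. virtio-net comparison
discussed above.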
> > Have you seen this?
> > https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
> >
>
> Thanks for the pointer, I'll try later.
>
> >> Thanks,
> >> Jun