From: "Cao, Waterman"
To: 'Yong Wang', Thomas Monjalon
Cc: "dev@dpdk.org"
Date: Wed, 5 Nov 2014 01:32:16 +0000
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
References: <1413181389-14887-1-git-send-email-yongwang@vmware.com>, <6543312.my61QThjD7@xps13>

Hi Yong,

We tested your patch with VMware ESXi 5.5. It works fine with R1.8 RC1.
You can find more details in Xiaonan's reports.

Regards,
Waterman

>-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
>Sent: Tuesday, October 14, 2014 5:00 AM
>To: Thomas Monjalon
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Only the last one is performance related, and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient. It also moves a constant assignment out of the pkt polling loop.
>
>We did the performance evaluation on a Nehalem box with 4 cores @ 2.8 GHz x 2 sockets:
>On the DPDK side, it's running some l3 forwarding apps in a VM on ESXi with one core assigned for polling. The client side is pktgen/dpdk, pumping 64B TCP packets at line rate. Before the patch, we were seeing ~900K PPS with 65% of a core used for DPDK. After the patch, we are seeing the same pkt rate with only 45% of a core used. CPU usage is collected factoring out the idle loop cost. The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts). I can add this info in the review request.
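As a rough illustration of the two ideas in the quoted paragraph above, the sketch below shows branch-prediction hints and a loop-invariant assignment hoisted out of the rx polling loop. This is not the actual vmxnet3 PMD code: DPDK provides the hints as likely()/unlikely() in rte_branch_prediction.h, but plain GCC builtins are used here so the snippet stands alone, and every name in it (ex_rx_desc, ex_rx_queue, ex_rx_burst, DESC_DONE) is hypothetical.

/*
 * Illustrative sketch only -- not the vmxnet3 PMD's real receive path.
 * It demonstrates (1) branch-prediction hints and (2) reading a
 * loop-invariant value once, outside the packet polling loop.
 */
#include <stdint.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

#define DESC_DONE 0x1u

struct ex_rx_desc {
	uint32_t status;   /* DESC_DONE set by the device when the buffer is ready */
	uint32_t error;    /* non-zero on a receive error (rare) */
	void    *data;     /* received buffer */
};

struct ex_rx_queue {
	struct ex_rx_desc *ring;
	uint16_t next;     /* next descriptor to poll */
	uint16_t size;     /* number of descriptors in the ring */
	uint8_t  port_id;  /* constant for the lifetime of the queue */
};

static uint16_t
ex_rx_burst(struct ex_rx_queue *rxq, void **rx_pkts, uint16_t nb_pkts)
{
	uint16_t nb_rx = 0;

	/* Loop-invariant value read once here rather than being
	 * re-read/re-assigned on every iteration of the polling loop. */
	const uint8_t port_id = rxq->port_id;
	(void)port_id;     /* a real PMD would store this into each mbuf */

	while (nb_rx < nb_pkts) {
		struct ex_rx_desc *rxd = &rxq->ring[rxq->next];

		/* At line rate a descriptor is usually ready within a burst,
		 * so hint the "not done" exit as the cold path. */
		if (unlikely(!(rxd->status & DESC_DONE)))
			break;

		rxq->next = (uint16_t)((rxq->next + 1) % rxq->size);

		/* Receive errors are exceptional; keep their handling
		 * out of the hot path. */
		if (unlikely(rxd->error != 0))
			continue;

		rx_pkts[nb_rx++] = rxd->data;
	}

	return nb_rx;
}

For context on the quoted figures: at 2.8 GHz, 65% of a core sustaining ~900K PPS works out to roughly 0.65 * 2.8e9 / 9e5 ~ 2,000 cycles per packet, while 45% works out to roughly 1,400, i.e. about a 30% reduction in per-packet CPU cost (a back-of-envelope estimate derived from the numbers above, not a separate measurement).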
>
>Yong
>________________________________________
>From: Thomas Monjalon
>Sent: Monday, October 13, 2014 1:29 PM
>To: Yong Wang
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi,
>
>2014-10-12 23:23, Yong Wang:
>> This patch series includes various fixes and improvements to the
>> vmxnet3 pmd driver.
>>
>> Yong Wang (5):
>> vmxnet3: Fix VLAN Rx stripping
>> vmxnet3: Add VLAN Tx offload
>> vmxnet3: Fix dev stop/restart bug
>> vmxnet3: Add rx pkt check offloads
>> vmxnet3: Some perf improvement on the rx path
>
>Please, could you describe the performance gain of these patches?
>Benchmark numbers would be appreciated.
>
>Thanks
>--
>Thomas