From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yong Wang
To: Thomas Monjalon
Cc: "dev@dpdk.org"
Date: Mon, 13 Oct 2014 21:00:28 +0000
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
In-Reply-To: <6543312.my61QThjD7@xps13>
References: <1413181389-14887-1-git-send-email-yongwang@vmware.com>, <6543312.my61QThjD7@xps13>
List-Id: patches and discussions about DPDK

Only the last one is performance related, and it merely tries to give hints to the compiler to hopefully make branch prediction
more efficient. It also moves a constant assignment out of the pkt polling loop.

We did a performance evaluation on a Nehalem box with 4 cores @ 2.8 GHz x 2 sockets.
On the DPDK side, it's running an l3 forwarding app in a VM on ESXi with one core assigned for polling. The client side is pktgen/dpdk, pumping 64B TCP packets at line rate. Before the patch, we were seeing ~900K PPS with 65% of a core used for DPDK. After the patch, we see the same packet rate with only 45% of a core used. CPU usage is collected factoring out the idle loop cost. The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts). I can add this info to the review request.

Yong
________________________________________
From: Thomas Monjalon
Sent: Monday, October 13, 2014 1:29 PM
To: Yong Wang
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi,

2014-10-12 23:23, Yong Wang:
> This patch series includes various fixes and improvements to the
> vmxnet3 pmd driver.
>
> Yong Wang (5):
>   vmxnet3: Fix VLAN Rx stripping
>   vmxnet3: Add VLAN Tx offload
>   vmxnet3: Fix dev stop/restart bug
>   vmxnet3: Add rx pkt check offloads
>   vmxnet3: Some perf improvement on the rx path

Please, could you describe the performance gain for these patches?
Benchmark numbers would be appreciated.

Thanks
--
Thomas