From: "Cao, Waterman"
To: Yong Wang, "Patel, Rashmin N", Stephen Hemminger
Cc: "dev@dpdk.org"
Date: Wed, 22 Oct 2014 07:07:58 +0000
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi Yong,

We verified your patch with VMware ESXi 5.5 and found that the l2fwd and l3fwd commands can't run on VMware.
But when we use the DPDK 1.7_rc1 package to validate the VMware regression, it works fine.

1. [Test Environment]:
 - VMware ESXi 5.5
 - 2 VMs
 - FC20 on Host / FC20-64 on VM
 - Crown Pass server (E2680 v2, Ivy Bridge)
 - Niantic 82599

2. [Test Topology]:
Create 2 VMs (Fedora 18, 64-bit).
We pass through one physical port (Niantic 82599) to each VM, and also create one virtual device (vmxnet3) in each VM.
To connect the two VMs, we use one vswitch to connect the two vmxnet3 interfaces.
Then, PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
The traffic flow for l2fwd/l3fwd is as below:
Ixia (traffic generator) -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia
3. [Test Steps]:
Untar dpdk1.8.rc1, compile and run;
L2fwd: ./build/l2fwd -c f -n 4 -- -p 0x3
L3fwd: ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 --config "(0,0,1),(1,0,2)"

4. [Error log]:

---VMware L2fwd:---
EAL: 0000:0b:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f678ae6e000
EAL: PCI memory mapped at 0x7f678af34000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI memory mapped at 0x7f678af33000
EAL: PCI memory mapped at 0x7f678af32000
EAL: PCI memory mapped at 0x7f678af30000
Lcore 0: RX port 0
Lcore 1: RX port 1
Initializing port 0...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 hw_ring=0x7f671b820080 dma_addr=0x100020080
PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
done:
Port 0, MAC address: 90:E2:BA:4A:33:78

Initializing port 1...
EAL: Error - exiting with code: 1
  Cause: rte_eth_tx_queue_setup: err=-22, port=1

---VMware L3fwd:---
EAL: TSC frequency is ~2793265 KHz
EAL: Master core 1 is ready (tid=9f49a880)
EAL: Core 2 is ready (tid=1d7f2700)
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: 0000:0b:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f079f3e4000
EAL: PCI memory mapped at 0x7f079f4aa000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI memory mapped at 0x7f079f4a9000
EAL: PCI memory mapped at 0x7f079f4a8000
EAL: PCI memory mapped at 0x7f079f4a6000
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1... Address: 90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route 0x03010100 / 24 (2)
LPM: Adding route 0x04010100 / 24 (3)
LPM: Adding route 0x05010100 / 24 (4)
LPM: Adding route 0x06010100 / 24 (5)
LPM: Adding route 0x07010100 / 24 (6)
LPM: Adding route 0x08010100 / 24 (7)
txq=0,0,0
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1... Address: 00:0C:29:F0:90:41, txq=1,0,0
EAL: Error - exiting with code: 1
  Cause: rte_eth_tx_queue_setup: err=-22, port=1

Can you help to recheck this patch with the latest DPDK code?
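For reference, both runs abort at the same point: setting up the Tx queue on the vmxnet3 port (port 1), where err=-22 is -EINVAL returned by the PMD's tx_queue_setup hook. Below is a minimal sketch of that call path, assuming a DPDK 1.8-era API; the descriptor count and Tx config values are illustrative assumptions, not the exact l2fwd/l3fwd code:

#include <stdlib.h>
#include <rte_ethdev.h>
#include <rte_debug.h>

/* Illustrative Tx queue configuration, not the sample apps' exact values.
 * A threshold/flags combination the PMD does not accept is reported back
 * as -EINVAL (-22). */
static const struct rte_eth_txconf tx_conf = {
	.tx_thresh = { .pthresh = 36, .hthresh = 0, .wthresh = 0 },
	.tx_free_thresh = 0,
	.tx_rs_thresh = 0,
	.txq_flags = 0,
};

static void
setup_one_tx_queue(uint8_t port_id)
{
	const uint16_t nb_txd = 512;   /* assumed ring size */

	int ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
					 rte_eth_dev_socket_id(port_id),
					 &tx_conf);
	if (ret < 0)
		rte_exit(EXIT_FAILURE,
			 "rte_eth_tx_queue_setup: err=%d, port=%u\n",
			 ret, (unsigned)port_id);
}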
Regards,
Waterman

-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
>Sent: Wednesday, October 22, 2014 6:10 AM
>To: Patel, Rashmin N; Stephen Hemminger
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Rashmin/Stephen,
>
>Since you have worked on the vmxnet3 pmd driver, I wonder if you can help review this set of patches. Any other reviews/test verifications are welcome, of course. We have reviewed/tested all patches internally.
>
>Yong
>________________________________________
>From: dev on behalf of Yong Wang
>Sent: Monday, October 13, 2014 2:00 PM
>To: Thomas Monjalon
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Only the last patch is performance related, and it merely tries to give hints to the compiler to hopefully make branch prediction more efficient. It also moves a constant assignment out of the packet polling loop.
>
>We did a performance evaluation on a Nehalem box with 4 cores @ 2.8 GHz x 2 sockets:
>On the DPDK side, it runs an l3 forwarding app in a VM on ESXi with one core assigned for polling. The client side is pktgen/dpdk, pumping 64B TCP packets at line rate. Before the patch, we were seeing ~900K PPS with 65% of a core used for DPDK. After the patch, we see the same packet rate with only 45% of a core used. CPU usage is collected factoring out the idle loop cost. The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts). I can add this info to the review request.
>
>Yong
>________________________________________
>From: Thomas Monjalon
>Sent: Monday, October 13, 2014 1:29 PM
>To: Yong Wang
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi,
>
>2014-10-12 23:23, Yong Wang:
>> This patch series includes various fixes and improvements to the
>> vmxnet3 pmd driver.
>>
>> Yong Wang (5):
>>   vmxnet3: Fix VLAN Rx stripping
>>   vmxnet3: Add VLAN Tx offload
>>   vmxnet3: Fix dev stop/restart bug
>>   vmxnet3: Add rx pkt check offloads
>>   vmxnet3: Some perf improvement on the rx path
>
>Please, could you describe the performance gain for these patches?
>Benchmark numbers would be appreciated.
>
>Thanks
>--
>Thomas
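To illustrate the kind of rx-path change Yong describes above (compiler branch-prediction hints plus hoisting a loop-invariant assignment out of the packet polling loop), here is a generic sketch; the ring and descriptor types are invented for the example and are not the vmxnet3 driver's own structures:

#include <stdint.h>
#include <rte_branch_prediction.h>   /* likely()/unlikely() wrap __builtin_expect() */
#include <rte_mbuf.h>

/* Hypothetical ring layout, defined here only so the sketch is self-contained. */
struct demo_desc { uint8_t ready; };
struct demo_ring {
	struct demo_desc desc[256];
	struct rte_mbuf *sw_ring[256];
	uint16_t next;
	uint8_t  port_id;
};

static uint16_t
demo_rx_poll(struct demo_ring *r, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	uint16_t nb_rx = 0;
	/* Hoisted constant: read the loop-invariant port id once instead of
	 * dereferencing the ring structure on every iteration. */
	const uint8_t in_port = r->port_id;

	while (nb_rx < nb_pkts) {
		struct demo_desc *rxd = &r->desc[r->next];

		/* Branch hint: an empty descriptor is the exit path, so keep
		 * the "packet available" case on the fall-through path. */
		if (unlikely(!rxd->ready))
			break;

		struct rte_mbuf *m = r->sw_ring[r->next];
		m->port = in_port;
		rx_pkts[nb_rx++] = m;
		r->next = (uint16_t)((r->next + 1) & 255);
	}
	return nb_rx;
}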