From: "Cao, Waterman"
To: Yong Wang, Thomas Monjalon
Cc: "dev@dpdk.org"
Date: Wed, 29 Oct 2014 00:33:39 +0000
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi Yong,

We will recheck it following your instructions and respond to your questions once we have the results.

Thanks,
Waterman

>-----Original Message-----
>From: Yong Wang [mailto:yongwang@vmware.com]
>Sent: Wednesday, October 29, 2014 3:59 AM
>To: Thomas Monjalon
>Cc: dev@dpdk.org; Cao, Waterman
>Subject: RE: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Thomas/Waterman,
>
>I couldn't reproduce the reported issue on v1.8.0-rc1; both l2fwd and l3fwd work fine using the same commands posted.
>
># dpdk_nic_bind.py --status
>
>Network devices using DPDK-compatible driver
>============================================
>0000:0b:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
>0000:13:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
>
>Network devices using kernel driver
>===================================
>0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active*
>
>Other network devices
>=====================
>
># ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>...
>EAL: TSC frequency is ~2800101 KHz
>EAL: Master core 1 is ready (tid=ee3c6840)
>EAL: Core 2 is ready (tid=de1ff700)
>EAL: PCI device 0000:02:00.0 on NUMA socket -1
>EAL: probe driver: 8086:100f rte_em_pmd
>EAL: 0000:02:00.0 not managed by UIO driver, skipping
>EAL: PCI device 0000:0b:00.0 on NUMA socket -1
>EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
>EAL: PCI memory mapped at 0x7f8bee3dd000
>EAL: PCI memory mapped at 0x7f8bee3dc000
>EAL: PCI memory mapped at 0x7f8bee3da000
>EAL: PCI device 0000:13:00.0 on NUMA socket -1
>EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
>EAL: PCI memory mapped at 0x7f8bee3d9000
>EAL: PCI memory mapped at 0x7f8bee3d8000
>EAL: PCI memory mapped at 0x7f8bee3d6000
>Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1... Address:00:0C:29:72:C6:7E, Allocated mbuf pool on socket 0
>LPM: Adding route 0x01010100 / 24 (0)
>LPM: Adding route 0x02010100 / 24 (1)
>LPM: Adding route 0x03010100 / 24 (2)
>LPM: Adding route 0x04010100 / 24 (3)
>LPM: Adding route 0x05010100 / 24 (4)
>LPM: Adding route 0x06010100 / 24 (5)
>LPM: Adding route 0x07010100 / 24 (6)
>LPM: Adding route 0x08010100 / 24 (7)
>txq=0,0,0
>Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1... Address:00:0C:29:72:C6:88, txq=1,0,0
>
>Initializing rx queues on lcore 1 ... rxq=0,0,0
>Initializing rx queues on lcore 2 ... rxq=1,0,0
>done: Port 0
>done: Port 1
>L3FWD: entering main loop on lcore 2
>L3FWD: -- lcoreid=2 portid=1 rxqueueid=0
>L3FWD: entering main loop on lcore 1
>L3FWD: -- lcoreid=1 portid=0 rxqueueid=0
>
>I don't have the exact setup, but I suspect this is related: the errors look like a tx queue parameter being used is not supported by the vmxnet3 backend. The patchset does not touch the txq config path, so it's not clear how it could break rte_eth_tx_queue_setup().
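>For reference, the failing call site in those samples looks roughly like this (a minimal sketch with abbreviated variable names, not the literal sample code; socketid is the NUMA socket of the polling lcore):
>
>    int ret;
>
>    /* tx_conf is the static rte_eth_txconf shown below;
>     * nb_txd is the tx ring size used by the sample. */
>    ret = rte_eth_tx_queue_setup(portid, 0 /* queue id */, nb_txd,
>                                 socketid, &tx_conf);
>    if (ret < 0)
>        rte_exit(EXIT_FAILURE,
>                 "rte_eth_tx_queue_setup: err=%d, port=%d\n",
>                 ret, portid);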
>So my questions to Waterman:
>(1) Is this a regression on the same branch, i.e. the unpatched build works but it fails with the patch applied?
>(2) By any chance did you change the following struct in main.c of those sample programs to a different value, in particular txq_flags?
>
>static const struct rte_eth_txconf tx_conf = {
>        .tx_thresh = {
>                .pthresh = TX_PTHRESH,
>                .hthresh = TX_HTHRESH,
>                .wthresh = TX_WTHRESH,
>        },
>        .tx_free_thresh = 0, /* Use PMD default values */
>        .tx_rs_thresh = 0,   /* Use PMD default values */
>        .txq_flags = (ETH_TXQ_FLAGS_NOMULTSEGS |  /* <== any changes here? */
>                      ETH_TXQ_FLAGS_NOVLANOFFL |
>                      ETH_TXQ_FLAGS_NOXSUMSCTP |
>                      ETH_TXQ_FLAGS_NOXSUMUDP |
>                      ETH_TXQ_FLAGS_NOXSUMTCP) };
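>If any of those flags were dropped, err=-22 (-EINVAL) is exactly what you would see: the vmxnet3 tx queue setup path rejects txq_flags combinations the device cannot honor. A sketch of the kind of guard involved (illustrative only, not the literal driver code):
>
>    /* vmxnet3 cannot do multi-segment tx, so queue setup insists the
>     * application waives it via ETH_TXQ_FLAGS_NOMULTSEGS. */
>    if ((tx_conf->txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS) !=
>        ETH_TXQ_FLAGS_NOMULTSEGS) {
>        PMD_INIT_LOG(ERR, "TX multi-segment offload not supported");
>        return -EINVAL;  /* surfaces as err=-22 in the application */
>    }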
>
>Thanks,
>Yong
>________________________________________
>From: Thomas Monjalon
>Sent: Tuesday, October 28, 2014 7:40 AM
>To: Yong Wang
>Cc: dev@dpdk.org; Cao, Waterman
>Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>
>Hi Yong,
>
>Is there any progress with this patchset?
>
>Thanks
>--
>Thomas
>
>2014-10-22 07:07, Cao, Waterman:
>> Hi Yong,
>>
>> We verified your patch with VMware ESXi 5.5 and found that the VMware l2fwd and l3fwd commands fail to run. With the DPDK 1.7_rc1 package, the same VMware regression works fine.
>>
>> 1. [Test Environment]:
>>  - VMware ESXi 5.5
>>  - 2 VMs
>>  - FC20 on host / FC20-64 on VMs
>>  - Crown Pass server (E5-2680 v2, Ivy Bridge)
>>  - Niantic 82599
>>
>> 2. [Test Topology]:
>>  Create 2 VMs (Fedora 18, 64 bit).
>>  We pass through one physical port (Niantic 82599) to each VM, and also create one vmxnet3 virtual device in each VM.
>>  To connect the two VMs, one vswitch bridges the two vmxnet3 interfaces.
>>  Then PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
>>  The traffic flow for l2fwd/l3fwd is:
>>  Ixia (traffic generator) -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia.
>>
>> 3. [Test Steps]:
>>
>>  Untar dpdk1.8.rc1, compile and run:
>>
>>  L2fwd: ./build/l2fwd -c f -n 4 -- -p 0x3
>>  L3fwd: ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>>
>> 4. [Error Log]:
>>
>> ---VMware L2fwd:---
>>
>> EAL: 0000:0b:00.0 not managed by UIO driver, skipping
>> EAL: PCI device 0000:13:00.0 on NUMA socket -1
>> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL: PCI memory mapped at 0x7f678ae6e000
>> EAL: PCI memory mapped at 0x7f678af34000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
>> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL: PCI memory mapped at 0x7f678af33000
>> EAL: PCI memory mapped at 0x7f678af32000
>> EAL: PCI memory mapped at 0x7f678af30000
>> Lcore 0: RX port 0
>> Lcore 1: RX port 1
>> Initializing port 0... PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
>> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
>> PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
>> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 hw_ring=0x7f671b820080 dma_addr=0x100020080
>> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
>> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>> done:
>> Port 0, MAC address: 90:E2:BA:4A:33:78
>>
>> Initializing port 1... EAL: Error - exiting with code: 1
>>   Cause: rte_eth_tx_queue_setup:err=-22, port=1
>>
>> ---VMware L3fwd:---
>>
>> EAL: TSC frequency is ~2793265 KHz
>> EAL: Master core 1 is ready (tid=9f49a880)
>> EAL: Core 2 is ready (tid=1d7f2700)
>> EAL: PCI device 0000:0b:00.0 on NUMA socket -1
>> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL: 0000:0b:00.0 not managed by UIO driver, skipping
>> EAL: PCI device 0000:13:00.0 on NUMA socket -1
>> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL: PCI memory mapped at 0x7f079f3e4000
>> EAL: PCI memory mapped at 0x7f079f4aa000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
>> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
>> EAL: PCI memory mapped at 0x7f079f4a9000
>> EAL: PCI memory mapped at 0x7f079f4a8000
>> EAL: PCI memory mapped at 0x7f079f4a6000
>> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1... Address:90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
>> LPM: Adding route 0x01010100 / 24 (0)
>> LPM: Adding route 0x02010100 / 24 (1)
>> LPM: Adding route 0x03010100 / 24 (2)
>> LPM: Adding route 0x04010100 / 24 (3)
>> LPM: Adding route 0x05010100 / 24 (4)
>> LPM: Adding route 0x06010100 / 24 (5)
>> LPM: Adding route 0x07010100 / 24 (6)
>> LPM: Adding route 0x08010100 / 24 (7)
>> txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
>> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
>> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>>
>> Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1... Address:00:0C:29:F0:90:41, txq=1,0,0 EAL: Error - exiting with code: 1
>>   Cause: rte_eth_tx_queue_setup: err=-22, port=1
>>
>> Can you help to recheck this patch with the latest DPDK code?
>>
>> Regards,
>> Waterman
>>
>> -----Original Message-----
>> >From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
>> >Sent: Wednesday, October 22, 2014 6:10 AM
>> >To: Patel, Rashmin N; Stephen Hemminger
>> >Cc: dev@dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Rashmin/Stephen,
>> >
>> >Since you have worked on the vmxnet3 pmd driver, I wonder if you could help review this set of patches. Any other reviews/test verifications are welcome, of course. We have reviewed/tested all patches internally.
>> >
>> >Yong
>> >________________________________________
>> >From: dev on behalf of Yong Wang
>> >Sent: Monday, October 13, 2014 2:00 PM
>> >To: Thomas Monjalon
>> >Cc: dev@dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Only the last one is performance related, and it merely gives hints to the compiler to hopefully make branch prediction more efficient. It also moves a constant assignment out of the packet polling loop.
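>> >To illustrate the pattern (a simplified, self-contained sketch, not the literal patch; the descriptor layout here is made up):
>> >
>> >#include <stdint.h>
>> >#include <rte_branch_prediction.h>  /* likely()/unlikely() */
>> >
>> >struct toy_desc {
>> >    uint8_t gen;  /* generation bit written by the device */
>> >    void *pkt;    /* received packet */
>> >};
>> >
>> >static uint16_t
>> >toy_poll(struct toy_desc *ring, uint16_t size, uint16_t start,
>> >         const uint8_t *gen_flag, void **pkts, uint16_t nb_pkts)
>> >{
>> >    uint16_t idx = start, nb_rx = 0;
>> >    /* Hoisted: the generation value to match is constant for the
>> >     * whole burst, so read it once rather than every iteration. */
>> >    const uint8_t cur_gen = *gen_flag;
>> >
>> >    while (nb_rx < nb_pkts) {
>> >        if (unlikely(ring[idx].gen != cur_gen))
>> >            break;                    /* ring empty: the uncommon case */
>> >        pkts[nb_rx++] = ring[idx].pkt;
>> >        if (unlikely(++idx == size))  /* wrap is rare */
>> >            idx = 0;
>> >    }
>> >    return nb_rx;
>> >}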
>> >
>> >We did a performance evaluation on a Nehalem box with 4 cores @ 2.8 GHz x 2 sockets.
>> >On the DPDK side, it runs an l3 forwarding app in a VM on ESXi with one core assigned for polling. The client side is pktgen/dpdk, pumping 64B TCP packets at line rate. Before the patch, we see ~900K PPS with 65% of a core used for DPDK. After the patch, we see the same packet rate with only 45% of a core used. CPU usage is collected factoring out the idle loop cost. The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts). I can add this info to the review request.
>> >
>> >Yong
>> >________________________________________
>> >From: Thomas Monjalon
>> >Sent: Monday, October 13, 2014 1:29 PM
>> >To: Yong Wang
>> >Cc: dev@dpdk.org
>> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
>> >
>> >Hi,
>> >
>> >2014-10-12 23:23, Yong Wang:
>> >> This patch series includes various fixes and improvements to the
>> >> vmxnet3 pmd driver.
>> >>
>> >> Yong Wang (5):
>> >>   vmxnet3: Fix VLAN Rx stripping
>> >>   vmxnet3: Add VLAN Tx offload
>> >>   vmxnet3: Fix dev stop/restart bug
>> >>   vmxnet3: Add rx pkt check offloads
>> >>   vmxnet3: Some perf improvement on the rx path
>> >
>> >Could you please describe the performance gain from these patches?
>> >Benchmark numbers would be appreciated.
>> >
>> >Thanks
>> >--
>> >Thomas