From: Yong Wang
To: Thomas Monjalon
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
Date: Tue, 28 Oct 2014 19:59:25 +0000

Thomas/Waterman,

I couldn't reproduce the reported issue on v1.8.0-rc1: both l2fwd and l3fwd work fine here using the same commands posted.

# dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:0b:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=
0000:13:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=

Network devices using kernel driver
===================================
0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active*

Other network devices
=====================

# ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
...
EAL: TSC frequency is ~2800101 KHz
EAL: Master core 1 is ready (tid=ee3c6840)
EAL: Core 2 is ready (tid=de1ff700)
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: probe driver: 8086:100f rte_em_pmd
EAL: 0000:02:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI memory mapped at 0x7f8bee3dd000
EAL: PCI memory mapped at 0x7f8bee3dc000
EAL: PCI memory mapped at 0x7f8bee3da000
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI memory mapped at 0x7f8bee3d9000
EAL: PCI memory mapped at 0x7f8bee3d8000
EAL: PCI memory mapped at 0x7f8bee3d6000
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1... Address:00:0C:29:72:C6:7E, Allocated mbuf pool on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route 0x03010100 / 24 (2)
LPM: Adding route 0x04010100 / 24 (3)
LPM: Adding route 0x05010100 / 24 (4)
LPM: Adding route 0x06010100 / 24 (5)
LPM: Adding route 0x07010100 / 24 (6)
LPM: Adding route 0x08010100 / 24 (7)
txq=0,0,0
Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1... Address:00:0C:29:72:C6:88, txq=1,0,0

Initializing rx queues on lcore 1 ... rxq=0,0,0
Initializing rx queues on lcore 2 ... rxq=1,0,0
done: Port 0
done: Port 1
L3FWD: entering main loop on lcore 2
L3FWD: -- lcoreid=2 portid=1 rxqueueid=0
L3FWD: entering main loop on lcore 1
L3FWD: -- lcoreid=1 portid=0 rxqueueid=0

I don't have the exact setup, but I suspect this is configuration related: the errors look like a tx queue parameter is being used that the vmxnet3 backend does not support. The patchset does not touch the txq config path, so it's not clear how it could break rte_eth_tx_queue_setup(). So my questions to Waterman:
(1) Is this a regression on the same branch, i.e. does the unpatched build work while the build with the patch applied fails?
(2) By any chance did you change the following struct in main.c for those sample programs to a different value, in particular txq_flags?

static const struct rte_eth_txconf tx_conf = {
        .tx_thresh = {
                .pthresh = TX_PTHRESH,
                .hthresh = TX_HTHRESH,
                .wthresh = TX_WTHRESH,
        },
        .tx_free_thresh = 0, /* Use PMD default values */
        .tx_rs_thresh = 0,   /* Use PMD default values */
        .txq_flags = (ETH_TXQ_FLAGS_NOMULTSEGS |   /* <== any changes here? */
                      ETH_TXQ_FLAGS_NOVLANOFFL |
                      ETH_TXQ_FLAGS_NOXSUMSCTP |
                      ETH_TXQ_FLAGS_NOXSUMUDP |
                      ETH_TXQ_FLAGS_NOXSUMTCP)
};
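For reference, err=-22 is -EINVAL, which is the usual way a PMD's queue setup rejects a configuration it does not support. A minimal sketch of that kind of check, with made-up names (this is not the actual vmxnet3 PMD code), just to show where a -22 could come from if txq_flags were changed:

/*
 * Illustrative only -- hypothetical names, not the vmxnet3 PMD source.
 * A PMD's tx queue setup may refuse a txq_flags combination it cannot
 * honor; the application then sees rte_eth_tx_queue_setup() fail with
 * err=-22 (-EINVAL), as in the error log below.
 */
#include <errno.h>
#include <stdint.h>

#define EXAMPLE_TXQ_FLAGS_NOMULTSEGS 0x0001 /* hypothetical flag value */

static int
example_txq_flags_check(uint32_t txq_flags)
{
        /* Example policy: only single-segment tx is supported, so the
         * application must set the "no multi-segment" flag. */
        if ((txq_flags & EXAMPLE_TXQ_FLAGS_NOMULTSEGS) == 0)
                return -EINVAL; /* surfaces in the app as err=-22 */
        return 0;
}

That's why question (2) above focuses on txq_flags.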
Thanks,
Yong
________________________________________
From: Thomas Monjalon
Sent: Tuesday, October 28, 2014 7:40 AM
To: Yong Wang
Cc: dev@dpdk.org; Cao, Waterman
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement

Hi Yong,

Is there any progress with this patchset?

Thanks
--
Thomas

2014-10-22 07:07, Cao, Waterman:
> Hi Yong,
>
>   We verified your patch with VMware ESXi 5.5 and found that the l2fwd and l3fwd commands can't run.
>   But when we use the DPDK1.7_rc1 package to validate the same VMware regression, it works fine.
>
> 1. [Test Environment]:
>  - VMware ESXi 5.5
>  - 2 VMs
>  - FC20 on Host / FC20-64 on VM
>  - Crown Pass server (E2680 v2 Ivy Bridge)
>  - Niantic 82599
>
> 2. [Test Topology]:
>   Create 2 VMs (Fedora 18, 64-bit).
>   We pass through one physical port (Niantic 82599) to each VM, and also create one virtual device (vmxnet3) in each VM.
>   To connect the two VMs, we use one vswitch between the two vmxnet3 interfaces.
>   So PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
>   The traffic flow for l2fwd/l3fwd is as below:
>   Ixia (traffic generator) -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia
>
> 3. [Test Steps]:
>
>   Untar dpdk1.8.rc1, compile, and run:
>
>   L2fwd: ./build/l2fwd -c f -n 4 -- -p 0x3
>   L3fwd: ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>
> 4. [Error log]:
>
> ---VMware L2fwd:---
>
> EAL: 0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI memory mapped at 0x7f678ae6e000
> EAL: PCI memory mapped at 0x7f678af34000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL: PCI memory mapped at 0x7f678af33000
> EAL: PCI memory mapped at 0x7f678af32000
> EAL: PCI memory mapped at 0x7f678af30000
> Lcore 0: RX port 0
> Lcore 1: RX port 1
> Initializing port 0... PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 hw_ring=0x7f671b820080 dma_addr=0x100020080
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
> done:
> Port 0, MAC address: 90:E2:BA:4A:33:78
>
> Initializing port 1... EAL: Error - exiting with code: 1
>   Cause: rte_eth_tx_queue_setup:err=-22, port=1
>
> ---VMware L3fwd:---
>
> EAL: TSC frequency is ~2793265 KHz
> EAL: Master core 1 is ready (tid=9f49a880)
> EAL: Core 2 is ready (tid=1d7f2700)
> EAL: PCI device 0000:0b:00.0 on NUMA socket -1
> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL: 0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI memory mapped at 0x7f079f3e4000
> EAL: PCI memory mapped at 0x7f079f4aa000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL: PCI memory mapped at 0x7f079f4a9000
> EAL: PCI memory mapped at 0x7f079f4a8000
> EAL: PCI memory mapped at 0x7f079f4a6000
> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1... Address:90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
> LPM: Adding route 0x01010100 / 24 (0)
> LPM: Adding route 0x02010100 / 24 (1)
> LPM: Adding route 0x03010100 / 24 (2)
> LPM: Adding route 0x04010100 / 24 (3)
> LPM: Adding route 0x05010100 / 24 (4)
> LPM: Adding route 0x06010100 / 24 (5)
> LPM: Adding route 0x07010100 / 24 (6)
> LPM: Adding route 0x08010100 / 24 (7)
> txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>
> Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1... Address:00:0C:29:F0:90:41, txq=1,0,0 EAL: Error - exiting with code: 1
>   Cause: rte_eth_tx_queue_setup: err=-22, port=1
>
> Can you help to recheck this patch with the latest DPDK code?
>
> Regards,
> Waterman
>
> -----Original Message-----
> >From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
> >Sent: Wednesday, October 22, 2014 6:10 AM
> >To: Patel, Rashmin N; Stephen Hemminger
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Rashmin/Stephen,
> >
> >Since you have worked on the vmxnet3 pmd driver, I wonder if you could help review this set of patches. Any other reviews/test verifications are welcome, of course. We have reviewed/tested all patches internally.
> >
> >Yong
> >________________________________________
> >From: dev on behalf of Yong Wang
> >Sent: Monday, October 13, 2014 2:00 PM
> >To: Thomas Monjalon
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Only the last patch is performance related, and it merely gives hints to the compiler to hopefully make branch prediction more efficient. It also moves a constant assignment out of the packet polling loop.
> >
> >We did a performance evaluation on a Nehalem box with 4 cores @ 2.8 GHz x 2 sockets:
> >On the DPDK side, it runs an l3 forwarding app in a VM on ESXi with one core assigned for polling. The client side is pktgen/dpdk, pumping 64B TCP packets at line rate. Before the patch, we were seeing ~900K PPS with 65% of a core used for DPDK. After the patch, we see the same packet rate with only 45% of a core used. CPU usage is collected factoring out the idle loop cost. The packet rate is a result of the mode we used for vmxnet3 (pure emulation mode running the default number of hypervisor contexts). I can add this info to the review request.
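> >
> >For illustration, the kind of change involved (a rough sketch with made-up names, not the actual patch): mark the rare case with unlikely() and hoist the loop-invariant assignment out of the poll loop.
> >
> >#include <rte_branch_prediction.h> /* likely()/unlikely() */
> >#include <stdint.h>
> >
> >/* Hypothetical rx poll loop: the empty-descriptor case is marked
> > * unlikely() so the hot path stays straight-line, and the per-queue
> > * constant is written once before the loop instead of per packet. */
> >static uint16_t
> >example_poll(const uint32_t *ring, uint16_t nb, uint8_t port, uint8_t *out_port)
> >{
> >        uint16_t i, nb_rx = 0;
> >
> >        *out_port = port;                /* loop-invariant assignment hoisted */
> >        for (i = 0; i < nb; i++) {
> >                if (unlikely(ring[i] == 0))  /* rare: nothing left to poll */
> >                        break;
> >                nb_rx++;
> >        }
> >        return nb_rx;
> >}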
> >
> >Yong
> >________________________________________
> >From: Thomas Monjalon
> >Sent: Monday, October 13, 2014 1:29 PM
> >To: Yong Wang
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Hi,
> >
> >2014-10-12 23:23, Yong Wang:
> >> This patch series includes various fixes and improvements to the
> >> vmxnet3 pmd driver.
> >>
> >> Yong Wang (5):
> >>   vmxnet3: Fix VLAN Rx stripping
> >>   vmxnet3: Add VLAN Tx offload
> >>   vmxnet3: Fix dev stop/restart bug
> >>   vmxnet3: Add rx pkt check offloads
> >>   vmxnet3: Some perf improvement on the rx path
> >
> >Please, could you describe the performance gain from these patches?
> >Benchmark numbers would be appreciated.
> >
> >Thanks
> >--
> >Thomas