From: Thomas Monjalon
To: Yong Wang
Cc: dev@dpdk.org
Date: Tue, 28 Oct 2014 15:40:04 +0100
Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
Message-ID: <2785109.uTPxqbdWuM@xps13>
References: <1413181389-14887-1-git-send-email-yongwang@vmware.com>
 <1c9ce28892d24052b2a3636507f9dba7@EX13-MBX-026.vmware.com>
Organization: 6WIND

Hi Yong,

Is there any progress with this patchset?

Thanks
--
Thomas

2014-10-22 07:07, Cao, Waterman:
> Hi Yong,
>
> We verified your patch with VMware ESXi 5.5 and found that the l2fwd and
> l3fwd commands fail to run. The same VMware regression passes with the
> DPDK 1.7_rc1 package.
>
> 1. [Test Environment]:
>    - VMware ESXi 5.5
>    - 2 VMs
>    - FC20 on host / FC20-64 in VMs
>    - Crown Pass server (E5-2680 v2, Ivy Bridge)
>    - Niantic 82599
>
> 2. [Test Topology]:
>    Create 2 VMs (Fedora 18, 64-bit).
>    We pass one physical port (Niantic 82599) through to each VM, and also
>    create one vmxnet3 virtual device in each VM. To connect the two VMs,
>    one vswitch links the two vmxnet3 interfaces.
>    So PF1 and vmxnet3A are in VM1; PF2 and vmxnet3B are in VM2.
>    The traffic flow for l2fwd/l3fwd is as below:
>    Ixia (traffic generator) -> PF1 -> vmxnet3A -> vswitch -> vmxnet3B -> PF2 -> Ixia
>
> 3. [Test Steps]:
>
>    Untar dpdk1.8.rc1, compile, and run:
>
>    L2fwd: ./build/l2fwd -c f -n 4 -- -p 0x3
>    L3fwd: ./build/l3fwd-vf -c 0x6 -n 4 -- -p 0x3 -config "(0,0,1),(1,0,2)"
>
> 4. [Error log]:
>
> ---VMware L2fwd:---
>
> EAL: 0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI memory mapped at 0x7f678ae6e000
> EAL: PCI memory mapped at 0x7f678af34000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL: PCI memory mapped at 0x7f678af33000
> EAL: PCI memory mapped at 0x7f678af32000
> EAL: PCI memory mapped at 0x7f678af30000
> Lcore 0: RX port 0
> Lcore 1: RX port 1
> Initializing port 0...
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f670b0f5580 hw_ring=0x7f6789fe5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f670b0f3480 hw_ring=0x7f671b820080 dma_addr=0x100020080
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
> done:
> Port 0, MAC address: 90:E2:BA:4A:33:78
>
> Initializing port 1... EAL: Error - exiting with code: 1
> Cause: rte_eth_tx_queue_setup: err=-22, port=1
>
> ---VMware L3fwd:---
>
> EAL: TSC frequency is ~2793265 KHz
> EAL: Master core 1 is ready (tid=9f49a880)
> EAL: Core 2 is ready (tid=1d7f2700)
> EAL: PCI device 0000:0b:00.0 on NUMA socket -1
> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL: 0000:0b:00.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:13:00.0 on NUMA socket -1
> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI memory mapped at 0x7f079f3e4000
> EAL: PCI memory mapped at 0x7f079f4aa000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:1b:00.0 on NUMA socket -1
> EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd
> EAL: PCI memory mapped at 0x7f079f4a9000
> EAL: PCI memory mapped at 0x7f079f4a8000
> EAL: PCI memory mapped at 0x7f079f4a6000
> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...
> Address: 90:E2:BA:4A:33:78, Allocated mbuf pool on socket 0
> LPM: Adding route 0x01010100 / 24 (0)
> LPM: Adding route 0x02010100 / 24 (1)
> LPM: Adding route 0x03010100 / 24 (2)
> LPM: Adding route 0x04010100 / 24 (3)
> LPM: Adding route 0x05010100 / 24 (4)
> LPM: Adding route 0x06010100 / 24 (5)
> LPM: Adding route 0x07010100 / 24 (6)
> LPM: Adding route 0x08010100 / 24 (7)
> txq=0,0,0
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f071f6f3c80 hw_ring=0x7f079e5e5280 dma_addr=0x373e5280
> PMD: ixgbe_dev_tx_queue_setup(): Using simple tx code path
> PMD: ixgbe_dev_tx_queue_setup(): Vector tx enabled.
>
> Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1...
> Address: 00:0C:29:F0:90:41, txq=1,0,0
> EAL: Error - exiting with code: 1
> Cause: rte_eth_tx_queue_setup: err=-22, port=1
>
> Can you help recheck this patch with the latest DPDK code?
>
> Regards
> Waterman
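A note for anyone reproducing this: err=-22 is -EINVAL, i.e. the vmxnet3
PMD's tx_queue_setup hook rejected one of its arguments. The sketch below
is not taken from the report above; it only shows the call site where the
error surfaces, assuming a DPDK version in which a NULL txconf selects the
driver's defaults. setup_one_tx_queue() is a made-up helper, and the port,
queue, and descriptor-count values are illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <rte_ethdev.h>

    static int
    setup_one_tx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_txd)
    {
            int ret;

            /* NULL txconf asks the PMD for its default Tx settings; a
             * txconf the driver refuses (flags, offloads, ring size) is a
             * classic source of -EINVAL from a PMD such as vmxnet3. */
            ret = rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                                         rte_eth_dev_socket_id(port_id),
                                         NULL);
            if (ret < 0)
                    printf("rte_eth_tx_queue_setup: err=%d (%s), port=%u\n",
                           ret, strerror(-ret), port_id);
            return ret;
    }

strerror(-ret) prints "Invalid argument" for -22, which at least narrows
the failure down to a parameter the driver refuses rather than an
allocation problem.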
>
> -----Original Message-----
> >From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Yong Wang
> >Sent: Wednesday, October 22, 2014 6:10 AM
> >To: Patel, Rashmin N; Stephen Hemminger
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Rashmin/Stephen,
> >
> >Since you have worked on the vmxnet3 pmd driver, I wonder if you could
> >help review this set of patches. Any other reviews/test verifications
> >are of course welcome. We have reviewed/tested all patches internally.
> >
> >Yong
> >________________________________________
> >From: dev on behalf of Yong Wang
> >Sent: Monday, October 13, 2014 2:00 PM
> >To: Thomas Monjalon
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Only the last patch is performance related, and it merely gives hints
> >to the compiler to hopefully make branch prediction more efficient. It
> >also moves a constant assignment out of the packet polling loop.
> >
> >We did a performance evaluation on a Nehalem box with 2 sockets of
> >4 cores @ 2.8 GHz: on the DPDK side, an l3 forwarding app runs in a VM
> >on ESXi with one core assigned for polling; the client side is
> >pktgen/dpdk, pumping 64B tcp packets at line rate. Before the patch, we
> >saw ~900K PPS with 65% of a core used for DPDK. After the patch, we see
> >the same packet rate with only 45% of a core used. CPU usage is
> >collected factoring out the idle loop cost. The packet rate is a result
> >of the mode we used for vmxnet3 (pure emulation mode running the
> >default number of hypervisor contexts). I can add this info to the
> >review request.
> >
> >Yong
> >________________________________________
> >From: Thomas Monjalon
> >Sent: Monday, October 13, 2014 1:29 PM
> >To: Yong Wang
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 0/5] vmxnet3 pmd fixes/improvement
> >
> >Hi,
> >
> >2014-10-12 23:23, Yong Wang:
> >> This patch series includes various fixes and improvements to the
> >> vmxnet3 pmd driver.
> >>
> >> Yong Wang (5):
> >>   vmxnet3: Fix VLAN Rx stripping
> >>   vmxnet3: Add VLAN Tx offload
> >>   vmxnet3: Fix dev stop/restart bug
> >>   vmxnet3: Add rx pkt check offloads
> >>   vmxnet3: Some perf improvement on the rx path
> >
> >Please, could you describe the performance gain of these patches?
> >Benchmark numbers would be appreciated.
> >
> >Thanks
> >--
> >Thomas
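For readers skimming the thread: the "hints to the compiler" Yong mentions
are DPDK's likely()/unlikely() macros from rte_branch_prediction.h. Below
is a minimal sketch of the two techniques as he describes them, not the
actual vmxnet3 patch; struct example_rxq and fetch_one_packet() are
made-up stand-ins for driver internals.

    #include <stdint.h>
    #include <stddef.h>
    #include <rte_branch_prediction.h>
    #include <rte_mbuf.h>

    struct example_rxq {
            uint16_t port_id;       /* constant for the queue's lifetime */
            /* ring state elided */
    };

    /* Stand-in for the driver's per-descriptor poll; real logic elided. */
    static struct rte_mbuf *
    fetch_one_packet(struct example_rxq *rxq)
    {
            (void)rxq;
            return NULL;
    }

    static uint16_t
    example_rx_burst(struct example_rxq *rxq, struct rte_mbuf **rx_pkts,
                     uint16_t nb_pkts)
    {
            /* Hoisted: the port id is constant per queue, so read it once
             * instead of re-assigning it for every packet in the loop. */
            const uint16_t port_id = rxq->port_id;
            uint16_t nb_rx = 0;

            while (nb_rx < nb_pkts) {
                    struct rte_mbuf *m = fetch_one_packet(rxq);

                    /* Running out of descriptors mid-burst is the rare
                     * case in a loaded poll loop, so mark the branch
                     * unlikely() and keep the hot path fall-through. */
                    if (unlikely(m == NULL))
                            break;

                    m->port = port_id;
                    rx_pkts[nb_rx++] = m;
            }
            return nb_rx;
    }

Neither change alters behavior; both shave per-packet work, which is
consistent with the same packet rate at lower CPU usage reported above.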