Date: Mon, 7 Jul 2014 19:48:56 -0400
From: Hyunseok
Reply-To: hyunseok@ieee.org
To: "Patel, Rashmin N" , dev@dpdk.org
Subject: Re: [dpdk-dev] Performance issue with vmxnet3 pmd
List-Id: patches and discussions about DPDK

Thanks for your response. I am actually more interested in comparing the
stock (non-DPDK) vmxnet3 driver against the vmxnet3 PMD.
When I forward packets with the stock vmxnet3 driver, I achieve much higher
throughput than with the vmxnet3 PMD. To make the comparison fair, I did not
leverage GRO/GSO. Do any of the overheads you mentioned play a role in this
comparison? Here I am comparing different drivers for the same vmxnet3
interface...

Regards,
Hyunseok

On Jul 7, 2014 7:03 PM, "Patel, Rashmin N" wrote:

> Hi Hyunseok,
>
> We should not compare vmxnet3 PMD performance with ixgbe PMD performance,
> as the vmxnet3 device is a para-virtual device; it is not comparable to a
> device directly assigned to a VM either.
> There is a VMEXIT/VMENTRY at each burst-size boundary, and that overhead
> can't be eliminated unless the design of vmxnet3 is updated in the future.
> In addition, packets are touched in the ESXi hypervisor vSwitch layer
> between the physical NIC and the virtual machine, which introduces extra
> overhead that you won't have when a Niantic NIC is used natively or passed
> through VT-d to a virtual machine.
>
> Feature-wise, we can compare it to the virtio PMD solution, but again
> there is a slight difference in device handling and backend driver support
> compared to the vmxnet3 device, so the performance comparison won't be
> apples to apples.
>
> Thanks,
> Rashmin
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hyunseok
> Sent: Monday, July 07, 2014 3:22 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Performance issue with vmxnet3 pmd
>
> Hi,
>
> I was testing l2fwd with the vmxnet3 PMD (included in DPDK).
>
> The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only
> 2.5 to 2.8 Gbps.
>
> This is in contrast with the ixgbe PMD, with which I could easily achieve
> a 10 Gbps forwarding rate.
>
> With the original vmxnet3 driver (non-PMD), I could also achieve close to
> 10 Gbps with multiple iperf streams. But I can never achieve that rate
> with the vmxnet3 PMD...
>
> So basically the vmxnet3 PMD doesn't seem that fast. Is this a known
> issue?
>
> Thanks,
> -Hyunseok
>