DPDK patches and discussions
From: Hyunseok <hyunseok@ieee.org>
To: "Patel, Rashmin N" <rashmin.n.patel@intel.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] Performance issue with vmxnet3 pmd
Date: Mon, 7 Jul 2014 19:48:56 -0400	[thread overview]
Message-ID: <CAAJNysLL_G=LMDEgJTPWgjo-HB_UnxgZDmcT2THNUCyAnnfS8Q@mail.gmail.com> (raw)
In-Reply-To: <C68F1134885B32458704E1E4DA3E34F3419B60D3@FMSMSX105.amr.corp.intel.com>

Thanks for your response.

I am actually more interested in a comparison between the stock (non-DPDK)
vmxnet3 driver and the vmxnet3 PMD.

When I forward packets with the stock vmxnet3 driver, I am able to achieve
much higher throughput than with the vmxnet3 PMD.  To make the comparison
fair, I did not leverage GRO/GSO.

Do any of the overheads you mentioned play a role in this comparison?
Here I am comparing different drivers for the same vmxnet3 interface...

Regards,
Hyunseok
On Jul 7, 2014 7:03 PM, "Patel, Rashmin N" <rashmin.n.patel@intel.com>
wrote:

> Hi Hyunseok,
>
> We should not compare Vmxnet3 PMD and ixgbe PMD performance: Vmxnet3 is
> a para-virtual device, and it is not similar to a device directly
> assigned to a VM either.
> There is VMEXIT/VMENTRY overhead at each burst-size boundary, and that
> overhead can't be eliminated unless the design of Vmxnet3 is updated in
> the future. In addition, the packets are touched in the ESXi hypervisor
> vSwitch layer between the physical NIC and the virtual machine, which
> introduces extra overhead that you won't have with a Niantic NIC used
> natively or passed through via VT-d to a virtual machine.
>
> Feature-wise, we can compare it to the Virtio PMD solution, but again
> there are differences in device handling and backend driver support
> compared to the Vmxnet3 device, so the performance comparison won't be
> apples to apples.
>
> Thanks,
> Rashmin
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hyunseok
> Sent: Monday, July 07, 2014 3:22 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Performance issue with vmxnet3 pmd
>
> Hi,
>
> I was testing l2fwd with the vmxnet3 PMD (included in DPDK).
>
> The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only
> 2.5 to 2.8 Gbps.
>
> This is in contrast with the ixgbe PMD, with which I could easily achieve
> a 10 Gbps forwarding rate.
>
> With the original vmxnet3 driver (non-PMD), I could also achieve close to
> 10 Gbps with multiple iperf flows.  But I can never achieve that rate with
> the vmxnet3 PMD...
>
> So basically the vmxnet3 PMD doesn't seem that fast.  Is this a known issue?
>
> Thanks,
> -Hyunseok
>

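The per-burst overhead Rashmin describes is visible in the shape of a
typical DPDK forwarding loop: on a para-virtual device such as vmxnet3,
the doorbell write that completes each TX burst traps into the
hypervisor. Below is a minimal l2fwd-style sketch (an illustration, not
the actual l2fwd source; forward_loop, BURST_SIZE, and the port/queue
numbers are hypothetical) showing where that per-burst cost lands:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 32

  /* Forward every packet received on rx_port out of tx_port.
   * Sketch only: ports/queues are assumed to be already configured. */
  static void
  forward_loop(uint8_t rx_port, uint8_t tx_port)
  {
          struct rte_mbuf *bufs[BURST_SIZE];
          uint16_t nb_rx, nb_tx;

          for (;;) {
                  /* Poll the RX descriptor ring shared with the device. */
                  nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
                  if (nb_rx == 0)
                          continue;

                  /* Posting the burst ends with a doorbell register write;
                   * on an emulated device this write causes the VMEXIT at
                   * the burst-size boundary mentioned above, while on a
                   * native or VT-d passed-through NIC it is an ordinary
                   * MMIO write to real hardware. */
                  nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

                  /* Free any packets the TX ring could not accept. */
                  while (nb_tx < nb_rx)
                          rte_pktmbuf_free(bufs[nb_tx++]);
          }
  }

The vSwitch processing Rashmin mentions adds per-packet work on top of
this loop, which together with the per-burst exits is consistent with
the gap between the 2.5-2.8 Gbps reported here and the 10 Gbps
achievable with ixgbe.
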
Thread overview: 7+ messages
2014-07-07 22:22 Hyunseok
2014-07-07 23:03 ` Patel, Rashmin N
2014-07-07 23:48   ` Hyunseok [this message]
2014-07-08  0:07     ` Patel, Rashmin N
2014-07-08  7:05 ` Thomas Monjalon
2014-07-08 15:08   ` Hyunseok
2014-08-13  8:13     ` Alex Markuze

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='CAAJNysLL_G=LMDEgJTPWgjo-HB_UnxgZDmcT2THNUCyAnnfS8Q@mail.gmail.com' \
    --to=hyunseok@ieee.org \
    --cc=dev@dpdk.org \
    --cc=rashmin.n.patel@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header via
  mailto: links, you can reply that way as well.

Be sure your reply has a Subject: header at the top and a blank line before the message body.