From: "Gray, Mark D" <mark.d.gray@intel.com>
To: Jun Xiao <jun.xiao@cloudnetengine.com>
Cc: dev <dev@dpdk.org>, discuss <discuss@openvswitch.org>
Subject: Re: [dpdk-dev] [ovs-discuss] vswitches performance comparison
Date: Wed, 22 Jul 2015 08:06:36 +0000	[thread overview]
Message-ID: <738D45BC1F695740A983F43CFE1B7EA92E2BF8CC@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <9D0E6ED2-6171-4EF5-AD21-01B1844B5136@cloudnetengine.com>

> >>
> >> I'd like to hope that's a problem with my methodology, but I just followed
> >> the installation guide without any customization.
> >>
> >> Hi Mark, do you have any performance data to share with us? Maybe we are
> >> using different types of workloads; as I mentioned, I am using a typical
> >> data center workload, while I guess you are talking about an NFV type of
> >> workload?
> >
> > The number being floated around on the mailing list recently is
> > 16.5 Mpps for phy-phy. However, I don't think we have any iperf data
> > to hand for your use case. When we test throughput into the VM we
> > usually generate the traffic externally and send it
> > NIC->OVS->VM->OVS->NIC. This is a little different from your setup.
> >
> 
> I guess a PMD driver is used inside the VM in that case, right?

Yes, but even when we use virtio-net we see the same, if not *slightly* better,
performance.
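
For reference, a minimal sketch of the kind of port setup behind that
NIC->OVS->VM->OVS->NIC path, assuming the userspace (netdev) datapath and the
dpdk/dpdkvhostuser port types from the OVS-DPDK install guide of that era
(bridge and port names here are illustrative, not taken from our test rig):

    # bridge on the userspace datapath
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    # DPDK-bound physical NIC on the external side
    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
    # vhost-user port handed to the VM on the other side
    ovs-vsctl add-port br0 vhostuser0 -- set Interface vhostuser0 type=dpdkvhostuser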

> > I do know, however, that ovs-dpdk typically has much higher
> > throughput than the kernel-space datapath.
> >
> 
> I'd say it depends on the workload: for small/medium packet sizes that's
> definitely true, while for TSO-sized workloads the gain is not that
> obvious (or is worse), as datapath overheads are amortized and the
> hardware can be leveraged.

For large packets the switch will eventually saturate the NIC at line rate, but the
total aggregate throughput of the switch should still be higher (you could, for
example, add more interfaces to take advantage of that).
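
To put rough numbers on that, line rate for large frames takes far fewer packets
per second than for small ones (a back-of-the-envelope sketch assuming a 10GbE
port and counting the 20 bytes of preamble plus inter-frame gap per frame on the
wire):

    # pps at 10Gb/s line rate = 10e9 / ((frame_size + 20 bytes) * 8 bits)
    echo "10000000000 / ((64 + 20) * 8)" | bc     # ~14.88 Mpps for 64B frames
    echo "10000000000 / ((1518 + 20) * 8)" | bc   # ~0.81 Mpps for 1518B frames

So per-packet datapath cost is much less of a bottleneck at 1518B than at 64B,
which is why the gap between the datapaths narrows for large packets.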

TSO is missing from the DPDK ports at the moment, but it is something
we plan to look at. We are currently working on enabling jumbo frames (which
don't work at the moment).
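
(A hedged aside: in OVS-DPDK releases after this thread, jumbo frames ended up
being enabled per port via the mtu_request column; that option did not exist when
this was written, and the port name below is illustrative:

    ovs-vsctl set Interface dpdk0 mtu_request=9000
)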

> > Have you seen this?
> > https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
> >
> 
> Thanks for the pointer, I'll try later.
> >>
> >> Thanks,
> >> Jun

Thread overview: 8+ messages
2015-07-21 18:00 [dpdk-dev] " Jun Xiao
2015-07-21 18:14 ` [dpdk-dev] [ovs-discuss] " Gray, Mark D
2015-07-21 18:28   ` Jun Xiao
2015-07-21 18:36     ` Gray, Mark D
2015-07-21 18:48       ` Jun Xiao
2015-07-22  8:06         ` Gray, Mark D [this message]
2015-07-21 21:02 ` [dpdk-dev] " Stephen Hemminger
2015-07-22  8:07   ` Gray, Mark D
