DPDK patches and discussions
From: Arnon Warshavsky <arnon@qwilt.com>
To: SwamZ <swamssrk@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Performance degradation with multiple ports
Date: Tue, 23 Feb 2016 08:27:03 +0200	[thread overview]
Message-ID: <CAKy9EB2HyNoPxX+T5BGec10x412ny986cSGfL_No5+_bEXKS1g@mail.gmail.com> (raw)
In-Reply-To: <CAMh0sicez3JAx9RvNLxSqfWi60xRr2hh_o5v1HnzOapi67h2gg@mail.gmail.com>

Hi Swamy

We hit a somewhat similar degradation (though not with l2fwd), described
here:
http://dev.dpdk.narkive.com/OL0KiHns/dpdk-dev-missing-prefetch-in-non-vector-rx-function
In our case it surfaced when not using the default configuration and
running in non-vector mode, and it behaved the same for both ixgbe and i40e.

/Arnon

On Tue, Feb 23, 2016 at 5:24 AM, SwamZ <swamssrk@gmail.com> wrote:

> Hi,
>
>  I am trying to find the maximum IO-core performance with DPDK 2.2 using
> the l2fwd application. I got the following numbers in comparison with
> DPDK 1.7:
>
>
>              One port               Two ports
>
>  DPDK 2.2    14.86 Mpps per port    11.8 Mpps per port
>
>  DPDK 1.7    11.8 Mpps per port     11.8 Mpps per port
>
>
>
> Traffic rate from router tester: 64-byte packets at 100% line rate
> (14.86 Mpps per port)
>
> CPU Speed : 3.3GHz
>
> NIC           : 82599ES 10-Gigabit
>
> IO Virtualization: SR-IOV
>
> Command used: ./l2fwd -c 3 -w 0000:02:00.1 -w 0000:02:00.0 -- -p 3 -T 1
>
>
> Note:
>
>  - Both ports are in the same NUMA node. I got the same results with a
> full CPU core as well as a hyper-threaded core.
>
>  - PCIe speed is the same for both ports; lspci and other relevant output
> are attached.
>
>  - In the two-port case, each core was receiving only 11.8 Mpps, which
> suggests RX is the bottleneck.
>
>
> Questions:
>
>  1) For the two-port case I get only 11.8 Mpps per port, while the
> single-port case reaches line rate. What could be the reason for this
> degradation? I searched the DPDK mail archive and found the following
> similar thread, but couldn't conclude anything from it:
>
> http://dpdk.org/ml/archives/dev/2013-May/000115.html
>
>
>  2) Did anybody try this kind of performance test with an i40e NIC?
>
>
> Thanks,
>
> Swamy
>

Thread overview: 2+ messages
2016-02-23  3:24 SwamZ
2016-02-23  6:27 ` Arnon Warshavsky [this message]
