From: Dave Wallace <dwallacelf@gmail.com>
To: Shihabur Rahman Chowdhury <shihab.buet@gmail.com>,
"Wiles, Keith" <keith.wiles@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK
Date: Wed, 12 Apr 2017 21:56:02 -0400
Message-ID: <5550cab3-aeba-ddb4-63e4-3821f91f0ebe@gmail.com>
In-Reply-To: <CAMGVCn5JCT3AKbNqeJPmZ2MHtgyoSYZcE_EEnyEyGM+uiCK2Tw@mail.gmail.com>
I have encountered a similar issue in the past on a system where the
NIC's PCI interface was attached to the other NUMA node, i.e. not the
one running the application.

Something else to check...
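A quick way to sanity-check that from inside the application is to
compare the NIC's socket with the polling lcore's socket. A minimal
sketch (the port ID and the helper name are my own; DPDK 16.11 APIs):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Warn if the lcore polling the port sits on a different NUMA
     * node than the NIC's PCI device. */
    static void check_port_numa(uint8_t port_id)
    {
        int dev_socket  = rte_eth_dev_socket_id(port_id); /* NIC's node */
        int core_socket = rte_socket_id();                /* this lcore */

        if (dev_socket >= 0 && dev_socket != core_socket)
            printf("WARNING: port %d is on socket %d, "
                   "lcore %u is on socket %d\n",
                   port_id, dev_socket, rte_lcore_id(), core_socket);
    }

The same information is also visible from sysfs via
/sys/bus/pci/devices/<pci-addr>/numa_node.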
-daw-
On 04/12/2017 08:06 PM, Shihabur Rahman Chowdhury wrote:
> We've disabled pause frames. I assume that also disables flow control;
> correct me if I am wrong.
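
For what it's worth, this can also be checked and forced from the
application side. A minimal sketch (the helper name is mine; note that
the mlx4 PMD may return -ENOTSUP for these ops, in which case ethtool
on the kernel netdev is the fallback):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Read the current flow-control settings, then turn pause off. */
    static int disable_flow_ctrl(uint8_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret;

        memset(&fc_conf, 0, sizeof(fc_conf));
        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret < 0)
            return ret;               /* e.g. -ENOTSUP on some PMDs */

        fc_conf.mode = RTE_FC_NONE;   /* no pause frames either way */
        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }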
>
> On the pktgen side, the dropped and overrun fields for Rx keep
> increasing in ifconfig. Btw, the overrun and dropped fields always
> show exactly the same value.
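
The DPDK port counters can tell you where those drops are being
counted. A sketch (imissed is packets dropped by the hardware because
the Rx queue was full; rx_nombuf is failed mbuf allocations):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Dump the Rx-side counters for one port. */
    static void print_rx_drops(uint8_t port_id)
    {
        struct rte_eth_stats stats;

        rte_eth_stats_get(port_id, &stats);
        printf("ipackets=%" PRIu64 " imissed=%" PRIu64
               " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               stats.ipackets, stats.imissed,
               stats.ierrors, stats.rx_nombuf);
    }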
>
> Our ConnectX-3 NIC has just a single 10G port, so there is no way to
> create such a loopback connection.
>
> Thanks
>
> Shihabur Rahman Chowdhury
> David R. Cheriton School of Computer Science
> University of Waterloo
>
>
>
> On Wed, Apr 12, 2017 at 6:41 PM, Wiles, Keith <keith.wiles@intel.com> wrote:
>
>>> On Apr 12, 2017, at 4:00 PM, Shihabur Rahman Chowdhury <shihab.buet@gmail.com> wrote:
>>> Hello,
>>>
>>> We are running a simple DPDK application and observing quite low
>>> throughput. We are currently testing the application with the
>>> following setup:
>>>
>>> - 2 machines with 2x Intel Xeon E5-2620 CPUs
>>> - Each machine with a single-port 10G Mellanox ConnectX-3 card
>>> - Mellanox DPDK version 16.11
>>> - Mellanox OFED 4.0-2.0.0.1 and the latest firmware for ConnectX-3
>>>
>>> The application is doing almost nothing. It reads a batch of 64
>>> packets from a single rxq, swaps the MAC addresses of each packet and
>>> writes it back to a single txq. Rx and Tx are handled by separate
>>> lcores on the same NUMA socket. We are running pktgen on another
>>> machine. With 64B packets we are seeing a ~14.8Mpps Tx rate but only
>>> a ~7.3Mpps Rx rate in pktgen. We checked the NIC on the machine
>>> running the DPDK application (with ifconfig) and it looks like a
>>> large number of packets are being dropped by the interface. Our
>>> ConnectX-3 card should theoretically be able to handle 10Gbps Rx +
>>> 10Gbps Tx (with a x4 link width, the theoretical max on PCIe 3.0
>>> should be ~31.2Gbps). Interestingly, when the Tx rate is reduced in
>>> pktgen (to ~9Mpps), the Rx rate increases to ~9Mpps.
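
For readers skimming the thread, the datapath described above boils
down to roughly the following (a sketch; port 0, queue 0 and a single
lcore are my simplifications of their two-lcore setup):

    #include <rte_ethdev.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 64

    /* Poll a burst from rxq 0, swap src/dst MAC, send on txq 0. */
    static void fwd_loop(uint8_t port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
            nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);

            for (i = 0; i < nb_rx; i++) {
                struct ether_hdr *eth =
                        rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
                struct ether_addr tmp = eth->s_addr;

                eth->s_addr = eth->d_addr;
                eth->d_addr = tmp;
            }

            nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
            while (nb_tx < nb_rx)       /* free anything not queued */
                rte_pktmbuf_free(bufs[nb_tx++]);
        }
    }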
>> Not sure what is going on here; when you drop the rate to 9Mpps I
>> assume you stop getting missed frames.
>> Do you have flow control enabled?
>>
>> On the pktgen side are you seeing missed RX packets?
>> Did you loop back the cable from the pktgen machine to the other port
>> on the pktgen machine, and did you get the same Rx/Tx performance in
>> that configuration?
>>
>>> We would highly appreciate some pointers as to what could possibly
>>> be causing this mismatch between Rx and Tx. Ideally, we should be
>>> able to see ~14Mpps Rx as well. Is it because we are using a single
>>> port? Or something else?
>>>
>>> FYI, we also ran the sample l2fwd application and testpmd and got
>>> comparable results in the same setup.
>>>
>>> Thanks
>>> Shihab
>> Regards,
>> Keith
>>
>>