DPDK usage discussions
From: Shahaf Shuler <shahafs@mellanox.com>
To: Shihabur Rahman Chowdhury <shihab.buet@gmail.com>,
	Dave Wallace <dwallacelf@gmail.com>,
	Olga Shern <olgas@mellanox.com>,
	Adrien Mazarguil <adrien.mazarguil@6wind.com>
Cc: "Wiles, Keith" <keith.wiles@intel.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Low Rx throughput when using Mellanox ConnectX-3 card with DPDK
Date: Thu, 13 Apr 2017 05:19:55 +0000	[thread overview]
Message-ID: <AM4PR05MB1505FB2BCC0B22B809153A23C3020@AM4PR05MB1505.eurprd05.prod.outlook.com>
In-Reply-To: <CAMGVCn4vrY9jHJMSWioD-ARUgKetjT9mNbAWUfKQJvKJ8d=mqA@mail.gmail.com>

Thursday, April 13, 2017 4:58 AM, Shihabur Rahman Chowdhury:
[...]
> >>>> setup
> >>>>
> >>>> - 2 machines with 2xIntel Xeon E5-2620 CPUs
> >>>> - Each machine with a Mellanox single port 10G ConnectX3 card
> >>>> - Mellanox DPDK version 16.11
> >>>> - Mellanox OFED 4.0-2.0.0.1 and latest firmware for ConnectX3
> >>>>
> >>>> The application is doing almost nothing. It is reading a batch of 64
> >>>> packets from a single rxq, swapping the mac of each packet and writing
> >>>> it back to a single txq. The rx and tx is being handled by separate
> >>>> lcores

Why did you choose such a configuration? Splitting Rx and Tx across two lcores can cause high overhead in snoop cycles: the first cache line of each packet is pulled into the Rx lcore's cache and then has to be invalidated when the Tx lcore swaps the MACs.

Since you are using two cores anyway, have you tried letting each core do both Rx and Tx (run to completion)?
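
(For illustration only, not part of the original thread: a rough run-to-completion sketch along those lines, assuming an already-initialized port 0 / queue 0 and DPDK 16.11-era struct names; all identifiers are placeholders.)

  #include <rte_ethdev.h>
  #include <rte_ether.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 64

  /* Each lcore does Rx, MAC swap and Tx itself, so a packet's first
   * cache line stays in that core's cache instead of bouncing between
   * an Rx lcore and a Tx lcore. */
  static int
  lcore_io_loop(void *arg)
  {
      const uint8_t port = 0;    /* placeholder port id */
      const uint16_t queue = 0;  /* placeholder queue id */
      struct rte_mbuf *bufs[BURST_SIZE];

      (void)arg;
      for (;;) {
          uint16_t nb_rx = rte_eth_rx_burst(port, queue, bufs, BURST_SIZE);
          uint16_t nb_tx, i;

          for (i = 0; i < nb_rx; i++) {
              struct ether_hdr *eth =
                  rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
              struct ether_addr tmp = eth->s_addr;

              /* swap source and destination MAC addresses */
              eth->s_addr = eth->d_addr;
              eth->d_addr = tmp;
          }

          nb_tx = rte_eth_tx_burst(port, queue, bufs, nb_rx);

          /* free packets the Tx queue could not accept */
          while (nb_tx < nb_rx)
              rte_pktmbuf_free(bufs[nb_tx++]);
      }
      return 0;
  }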

> >>>> on the same NUMA socket. We are running pktgen on another machine.
> >>>> With 64B sized packets we are seeing ~14.8Mpps Tx rate and ~7.3Mpps Rx
> >>>> rate in pktgen. We checked the NIC on the machine running the DPDK
> >>>> application (with ifconfig) and it looks like there is a large number
> >>>> of packets being dropped by the interface.

This might be because the scenario is SW-bound: when the application does not process packets fast enough, the NIC has to drop the excess ingress traffic.
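
(Illustrative sketch, not from the thread: one quick way to confirm SW-bound drops is to poll the port's imissed counter, which counts packets the HW dropped because the Rx queue had no free descriptors; port 0 is a placeholder.)

  #include <inttypes.h>
  #include <stdio.h>
  #include <rte_ethdev.h>

  /* A steadily growing 'imissed' while 'ierrors' stays flat means the NIC
   * is fine and the application simply is not draining the Rx queue fast
   * enough. */
  static void
  print_drop_stats(uint8_t port_id)
  {
      struct rte_eth_stats st;

      if (rte_eth_stats_get(port_id, &st) != 0)
          return;
      printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
             " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
             (unsigned int)port_id, st.ipackets, st.imissed,
             st.ierrors, st.rx_nombuf);
  }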

> >>>> Our connectx3 card should theoretically be able to handle 10Gbps Rx +
> >>>> 10Gbps Tx throughput (with channel width 4, the theoretical max on
> >>>> PCIe 3.0 should be ~31.2Gbps). Interestingly, when the Tx rate is
> >>>> reduced in pktgen (to ~9Mpps), the Rx rate increases to ~9Mpps.
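
(As a quick sanity check of that figure, assuming "channel width 4" means a PCIe 3.0 x4 link: 8 GT/s per lane x 4 lanes x 128b/130b encoding is roughly 31.5 Gbit/s of raw link bandwidth, so ~31.2 Gbit/s is the right ballpark, though TLP headers and descriptor traffic eat into what is actually usable for packet data.)
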
> >>> Not sure what is going on here, when you drop the rate to 9Mpps I
> >>> assume you stop getting missed frames.
> >>> Do you have flow control enabled?
> >>>
> >>> On the pktgen side are you seeing missed RX packets?
> >>> Did you loopback the cable from the pktgen machine to the other port on
> >>> the pktgen machine and did you get the same Rx/Tx performance in that
> >>> configuration?
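
(Illustrative only: one way to check this from the DPDK side is the ethdev flow-control query, assuming the PMD implements it; port 0 is a placeholder. On the Mellanox bifurcated setup, 'ethtool -a <netdev>' should report the same pause settings.)

  #include <stdio.h>
  #include <rte_ethdev.h>

  /* Report the link-level flow control (pause frame) mode of a port. */
  static void
  print_flow_ctrl(uint8_t port_id)
  {
      struct rte_eth_fc_conf fc = { 0 };

      if (rte_eth_dev_flow_ctrl_get(port_id, &fc) != 0) {
          printf("port %u: flow control query not supported\n",
                 (unsigned int)port_id);
          return;
      }
      printf("port %u: mode=%d (0=none 1=rx-pause 2=tx-pause 3=full)\n",
             (unsigned int)port_id, (int)fc.mode);
  }
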
> >>>> We would highly appreciate it if we could get some pointers as to what
> >>>> can possibly be causing this mismatch in Rx and Tx. Ideally, we should
> >>>> be able to see ~14Mpps Rx as well. Is it because we are using a single
> >>>> port?

Our "Hero number" for testpmd application which do i/o forwarding with ConnectX-3 is ~10Mpps for single core.
Dual core should reach ~14Mpps.
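
For reference, this kind of I/O-forwarding number is usually measured with something along these lines (core list and queue counts below are just an example, and exact option spellings may vary between DPDK releases):

  ./testpmd -l 0-2 -n 4 -- -i --rxq=2 --txq=2 --nb-cores=2
  testpmd> set fwd io
  testpmd> start
  testpmd> show port stats all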

> >>>> Or something else?
> >>>>
> >>>> FYI, we also ran the sample l2fwd application and test-pmd and got
> >>>> comparable results in the same setup.
> >>>>
> >>>> Thanks
> >>>> Shihab
> >>>>
> >>> Regards,
> >>> Keith
> >>>

Thread overview: 12+ messages
2017-04-12 21:00 Shihabur Rahman Chowdhury
2017-04-12 22:41 ` Wiles, Keith
2017-04-13  0:06   ` Shihabur Rahman Chowdhury
2017-04-13  1:56     ` Dave Wallace
2017-04-13  1:57       ` Shihabur Rahman Chowdhury
2017-04-13  5:19         ` Shahaf Shuler [this message]
2017-04-13 14:21           ` Shihabur Rahman Chowdhury
2017-04-13 15:49             ` Kyle Larose
2017-04-17 17:43               ` Shihabur Rahman Chowdhury
2017-04-13 13:49     ` Wiles, Keith
2017-04-13 14:22       ` Shihabur Rahman Chowdhury
2017-04-13 14:47         ` Wiles, Keith
