DPDK usage discussions
From: Peter Hynes <p.hynes@titanicsystems.com>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] Lossless connection with Mellanox ConnectX-3 NIC
Date: Fri, 15 Jan 2016 12:59:36 +0000
Message-ID: <DB5PR06MB14328D547FA179DC99E23AAF98CD0@DB5PR06MB1432.eurprd06.prod.outlook.com>

Hi all,

I'm hoping that someone can help me with a question regarding the use of Mellanox NICs with DPDK.

Is it possible to guarantee lossless Ethernet traffic reception on a 40G Mellanox ConnectX-3 NIC with flow control enabled? When flow control is enabled and the receive side is under heavy load I see the vport_rx_dropped statistic increment.
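For reference, I currently enable flow control with ethtool, but I believe the same request can be made from within DPDK through the generic ethdev API. The following is only a minimal sketch, not my actual code, and it assumes the mlx4 PMD implements the flow-control ops:

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    enable_flow_control(uint8_t port_id)
    {
            struct rte_eth_fc_conf fc_conf;

            memset(&fc_conf, 0, sizeof(fc_conf));
            /* Start from the current settings and only change the mode. */
            rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);

            fc_conf.mode = RTE_FC_FULL;  /* generate and honour pause frames */

            return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }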

I have a system which uses DPDK to receive packets from a Mellanox ConnectX-3 NIC. The system then does some processing on the packets. For demonstration purposes I intend to connect the NIC to an external packet source. At the moment I have a Mellanox NIC at the packet source side as well. I'd like to ensure that all packets transmitted from the packet source are received in DPDK.

Using 1024-byte packets, I can achieve 40 Gbps without any dropped packets if all I do is receive and free the packets. With heavy processing, the rate at which I can receive packets drops. I queue received packets and only request new packets from the NIC when there is space in the queue; a simplified sketch of this pattern follows.
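The receive loop is roughly the following (a simplified sketch, not my actual code; names such as work_ring and BURST are placeholders):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST 32

    static void
    rx_loop(uint8_t port_id, uint16_t queue_id, struct rte_ring *work_ring)
    {
            struct rte_mbuf *pkts[BURST];
            uint16_t nb;
            unsigned enq;

            for (;;) {
                    /*
                     * Only poll the NIC when the worker queue can take a
                     * full burst; otherwise let the RX ring fill so that,
                     * with flow control enabled, pause frames are sent.
                     */
                    if (rte_ring_free_count(work_ring) < BURST)
                            continue;

                    nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST);
                    if (nb == 0)
                            continue;

                    /* Hand the packets over to the (slower) processing stage. */
                    enq = rte_ring_enqueue_burst(work_ring, (void **)pkts, nb);

                    /* Should not happen given the free-count check, but be safe. */
                    while (enq < nb)
                            rte_pktmbuf_free(pkts[enq++]);
            }
    }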

In one test, heavy processing limits me to a rate of about 4 Gbps. With flow control enabled, I can see pause frames being generated (via ethtool statistics). The packet source's transmit rate drops to around 16 Gbps (the receive rate in DPDK is 4 Gbps). The rx_dropped statistic reads 0, but vport_rx_dropped increments rapidly.

With flow control disabled, the packet source rate stays at 40 Gbps and both rx_dropped and vport_rx_dropped increase.

It is clear that flow control is active and the NIC is receiving packets. However, some packets get dropped before being sent to DPDK.

The Mellanox documentation describes vport_rx_dropped as:
"Received packets discarded due to lack of software receive buffers (WQEs). Important indication to whether RX completion routines are keeping up with hardware ingress packet rate"

Using Intel XL710 NICs (and i40e drivers) I can achieve lossless packet reception using the same code. However, in the proposed system we'd like to use Mellanox NICs.

I appreciate that the Mellanox cards have an extra layer between the hardware and DPDK, i.e. the Mellanox OFED.

I am using the latest firmware and have tried software built against DPDK 2.1.0 and 2.2.0. I've tried changing the receive buffer lengths in DPDK and have set the rx/tx ring parameters to their maximum using ethtool; a rough sketch of the DPDK side of this is below.
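For context, this is roughly what I mean by changing the receive buffers in DPDK, i.e. a larger per-queue descriptor count (which I expect translates into more posted WQEs) and a mempool large enough to back it (a minimal sketch only; the numbers are examples, not recommendations):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NB_RX_DESC   4096                 /* RX descriptors (WQEs) per queue */
    #define NB_MBUF      (8 * NB_RX_DESC)     /* headroom for packets held in the app */
    #define MBUF_DATA_SZ (2048 + RTE_PKTMBUF_HEADROOM)

    static int
    setup_rx_queue(uint8_t port_id, uint16_t queue_id, int socket_id)
    {
            struct rte_mempool *pool;

            pool = rte_pktmbuf_pool_create("rx_pool", NB_MBUF,
                                           256 /* per-lcore cache */, 0,
                                           MBUF_DATA_SZ, socket_id);
            if (pool == NULL)
                    return -1;

            /* A NULL rx_conf keeps the PMD's default RX queue configuration. */
            return rte_eth_rx_queue_setup(port_id, queue_id, NB_RX_DESC,
                                          socket_id, NULL, pool);
    }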

Kind regards,

Peter
