DPDK usage discussions
From: Yueyang Pan <yueyang.pan@epfl.ch>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [ICE DPDK 21.11] Latency bump at certain rate
Date: Tue, 21 Dec 2021 20:47:49 +0800	[thread overview]
Message-ID: <angelmatophylax-1640090869.910066.16395@mail.epfl.ch> (raw)


Hi all,
	I was measuring latency with the new Intel E810. I first ran the testpmd application with a single core and a single pair of queues and measured the latency on the generator side. The problem is that a latency bump occurs once the background traffic exceeds a certain threshold. I noticed that this threshold moves (i.e. the bump appears at a different background traffic rate) depending on the speed of the receive and transmit burst functions used (bulk, SSE or AVX2).
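
For reference, the RX/TX burst path can be constrained by capping the SIMD bitwidth the PMD is allowed to use, either with the EAL option --force-max-simd-bitwidth or programmatically. This is only an illustrative sketch against the 21.11 rte_vect API (the helper name force_sse_path is just for illustration, not code I actually ran):

#include <rte_vect.h>

/* Cap the SIMD width the PMDs may use: 64 effectively forces the scalar
 * (bulk) path, 128 allows SSE, 256 allows AVX2. Call this before the port
 * is configured/started so the driver's path selection sees it; the same
 * effect is available from the command line via --force-max-simd-bitwidth. */
static int force_sse_path(void)
{
	return rte_vect_set_max_simd_bitwidth(RTE_VECT_SIMD_128);
}
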
	To identify where the bump occurs, I added hardware timestamp support to the application. I enabled the RX hardware timestamp offload capability of the E810, called rte_eth_timesync_read_time after rte_eth_rx_burst returns, and rte_eth_timesync_read_tx_timestamp after rte_eth_tx_burst returns. I found that the latency bump occurs between the moment the packet arrives at the PHY core and the moment rte_eth_tx_burst returns. I also measured CPU cycles just before rte_eth_rx_burst is called and just after rte_eth_tx_burst returns in user space. The gap in CPU cycles is stable regardless of the background traffic, which means the bump resides between the packet arriving at the NIC and the packet being read from main memory via rte_eth_rx_burst.
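
A minimal sketch of that measurement loop, assuming DPDK 21.11, a port already configured with RTE_ETH_RX_OFFLOAD_TIMESTAMP and timesync enabled, and error handling omitted (forward_and_timestamp is a placeholder name; the per-packet RX timestamp handling via the mbuf dynamic field is left out):

#include <time.h>
#include <rte_ethdev.h>
#include <rte_cycles.h>
#include <rte_mbuf.h>

#define BURST 32

/* Forward packets on queue 0 of one port and record the reference points
 * described above: NIC clock right after RX, NIC TX completion time, and
 * TSC cycles around the whole RX->TX span. */
static void forward_and_timestamp(uint16_t port)
{
	struct rte_mbuf *pkts[BURST];
	struct timespec nic_after_rx, nic_tx_done;
	uint64_t tsc_before_rx, tsc_after_tx;

	for (;;) {
		tsc_before_rx = rte_rdtsc();
		uint16_t nb = rte_eth_rx_burst(port, 0, pkts, BURST);
		if (nb == 0)
			continue;

		/* NIC (PHC) time right after the burst returned. Per-packet PHY
		 * timestamps arrive via the timestamp mbuf dynamic field when
		 * RTE_ETH_RX_OFFLOAD_TIMESTAMP is enabled (omitted here). */
		rte_eth_timesync_read_time(port, &nic_after_rx);

		uint16_t sent = rte_eth_tx_burst(port, 0, pkts, nb);
		tsc_after_tx = rte_rdtsc();

		/* NIC timestamp of the transmitted packet, if one was latched. */
		if (rte_eth_timesync_read_tx_timestamp(port, &nic_tx_done) == 0) {
			/* ... accumulate (nic_tx_done - per-packet RX timestamp)
			 * and (tsc_after_tx - tsc_before_rx) into histograms ... */
		}

		/* Free anything the TX ring did not accept. */
		for (uint16_t i = sent; i < nb; i++)
			rte_pktmbuf_free(pkts[i]);
	}
}
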
	Meanwhile, I failed to find any DPDK latency report from Intel, or mails from others who might have experienced the same problem. Has anyone seen this issue, and does anyone know what happens between the packet being in the PHY core and the packet being in memory? Maybe the Intel validation team?
	I suspect it may be related to the packet discarding logic in the firmware or to the DMA process. I have seen this issue on different servers and with different versions of the firmware and DDP as well.
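
For anyone wanting to check for NIC-level discards alongside this, a minimal sketch using the standard ethdev statistics (illustrative only; dump_drop_counters is a placeholder name):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print the basic port counters: a growing imissed (packets dropped because
 * the RX descriptor ring was full) or rx_nombuf would point at the host/PMD
 * side, while drops with flat counters here would point further down. */
static void dump_drop_counters(uint16_t port)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port, &st) != 0)
		return;
	printf("ipackets=%" PRIu64 " imissed=%" PRIu64
	       " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
	       st.ipackets, st.imissed, st.ierrors, st.rx_nombuf);
}
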

Configuration of the server:
CPU: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
RAM: DDR4 11 x 32 GB, 2933 MHz, 6 Channels
OS: Ubuntu 20.04.2 LTS
Kernel: 5.4.0-89-generic
Ice kernel driver version: 1.6.7
OS default DDP version: 1.3.26
Firmware version: 3.0
Traffic generator: MoonGen with two Mellanox ConnectX-5 EN 100G NICs

	Best Wishes
	Pan




