DPDK usage discussions
* [ICE DPDK 21.11] Latency bump at certain rate
@ 2021-12-21 12:47 Yueyang Pan
  0 siblings, 0 replies; 3+ messages in thread
From: Yueyang Pan @ 2021-12-21 12:47 UTC (permalink / raw)
  To: users


Hi all,
	I was measuring latency with the new Intel E810. I first used the testpmd application with a single core and a single pair of queues and measured latency on the generator side. The problem is that a latency bump occurs when the background traffic rises above a certain threshold. I noticed that the threshold moves (the bump occurs at a different background traffic rate) depending on the speed of the receive and transmit functions (i.e. bulk, SSE or AVX2).
	To identify where the bump occurs, I added hardware timestamp support to the application. I enabled the Rx hardware timestamp offload capability of the E810, called rte_eth_timesync_read_time after rte_eth_rx_burst returns, and rte_eth_timesync_read_tx_timestamp after rte_eth_tx_burst returns. I found that the latency bump occurs between the packet arriving at the PHY core and rte_eth_tx_burst returning. I also measured the CPU cycles in user space from just before rte_eth_rx_burst is called until rte_eth_tx_burst returns. The gap in CPU cycles is stable regardless of the background traffic, which means the bump lies between the packet arriving at the NIC and the packet being extracted from main memory via rte_eth_rx_burst.
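The latency figure above is just the difference between two device timestamps. A minimal, self-contained sketch of that arithmetic (the rte_eth_timesync_* calls themselves need a DPDK environment and are only referenced in the comment; the sample values are made up):

```c
#include <stdint.h>
#include <time.h>

/* Nanoseconds elapsed between two hardware timestamps, e.g. the device
 * time read via rte_eth_timesync_read_time() after rte_eth_rx_burst()
 * returns, and the Tx timestamp read via
 * rte_eth_timesync_read_tx_timestamp() after rte_eth_tx_burst() returns. */
static int64_t timespec_diff_ns(const struct timespec *start,
                                const struct timespec *end)
{
    return (int64_t)(end->tv_sec - start->tv_sec) * 1000000000LL
         + (end->tv_nsec - start->tv_nsec);
}
```

Both timestamps come from the same NIC clock, so no host/device clock conversion is needed for the difference.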
	Meanwhile, I failed to find any DPDK latency report from Intel, or mails from others who might have experienced the same problem. Has anyone met the same problem, and does anyone know what happens between the packet being in the PHY core and the packet being in main memory? Maybe the Intel validation team?
	I guess it may be related to packet-discarding logic in the firmware or to the DMA process. I have seen this issue on different servers and with different versions of the firmware and DDP as well.

Configuration of the server:
CPU: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
RAM: DDR4 11 x 32 GB, 2933 MHz, 6 Channels
OS: Ubuntu 20.04.2 LTS
Kernel: 5.4.0-89-generic
Ice kernel driver version: 1.6.7
OS default DDP version: 1.3.26
Firmware version: 3.0
Traffic generator: MoonGen with two Mellanox ConnectX-5 EN 100G NICs

	Best Wishes
	Pan






* Re: [ICE DPDK 21.11] Latency bump at certain rate
  2022-01-07 21:48 Anantharam, Arjun
@ 2022-01-11  4:40 ` neeraj.sj
  0 siblings, 0 replies; 3+ messages in thread
From: neeraj.sj @ 2022-01-11  4:40 UTC (permalink / raw)
  To: Anantharam, Arjun; +Cc: users

On 2022-01-07 21:48, Anantharam, Arjun wrote:
> Hi,
> 
> I believe the suggestion here would be to try the new
> "rx_low_latency=1" option available in DPDK 21.11:
> 
> *        Low Rx latency (default 0)
> 
> vRAN workloads require a low-latency DPDK interface for the fronthaul
> connection to the radio. By specifying 1 for the rx_low_latency
> parameter, each completed Rx descriptor is written immediately to host
> memory, and the Rx interrupt latency can be reduced to 2 us:
> 
> -a 0000:88:00.0,rx_low_latency=1
> 
> As a trade-off, this configuration may degrade packet processing
> performance due to the PCI bandwidth limitation.
> 
> If they are using an old DPDK version, they can try to backport the
> patch below:
> 
> http://patchwork.dpdk.org/project/dpdk/patch/20210924093429.12068-1-alvinx.zhang@intel.com/
> 
> 
> Thanks
Hi,
We are concerned about dropping traffic: limiting its rate to 22-24 Gbps 
rather than allowing the full 40 Gbps. A formula for CIR, PIR, EBS and 
CBS would be much appreciated.
Thanks.
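There is no sizing formula in the thread itself; a common rule of thumb is burst size (bytes) = rate (bytes/s) x tolerated burst duration. The sketch below is an assumption, not a validated configuration: the field names are modeled on DPDK's rte_meter parameter structs (which use cbs/pbs for the two-rate meter; ebs appears in the single-rate variant), and the 10 ms burst window is a placeholder to tune for your traffic profile.

```c
#include <stdint.h>

/* Two-rate meter parameters in the units DPDK's rte_meter API expects:
 * rates in bytes/second, burst sizes in bytes. */
struct meter_cfg {
    uint64_t cir;  /* committed information rate, bytes/s */
    uint64_t pir;  /* peak information rate, bytes/s */
    uint64_t cbs;  /* committed burst size, bytes */
    uint64_t pbs;  /* peak burst size, bytes */
};

/* Rule of thumb: burst size = rate * burst duration you are willing
 * to absorb before the meter marks/drops. burst_ms is an assumed knob,
 * not a value recommended anywhere in this thread. */
static struct meter_cfg meter_from_gbps(double cir_gbps, double pir_gbps,
                                        double burst_ms)
{
    struct meter_cfg m;
    m.cir = (uint64_t)(cir_gbps * 1e9 / 8.0);
    m.pir = (uint64_t)(pir_gbps * 1e9 / 8.0);
    m.cbs = (uint64_t)((double)m.cir * burst_ms / 1000.0);
    m.pbs = (uint64_t)((double)m.pir * burst_ms / 1000.0);
    return m;
}
```

For the 22-24 Gbps range mentioned above, meter_from_gbps(22.0, 24.0, 10.0) yields cir = 2.75 GB/s, pir = 3 GB/s, cbs = 27.5 MB, pbs = 30 MB.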


* Re: [ICE DPDK 21.11] Latency bump at certain rate
@ 2022-01-07 21:48 Anantharam, Arjun
  2022-01-11  4:40 ` neeraj.sj
  0 siblings, 1 reply; 3+ messages in thread
From: Anantharam, Arjun @ 2022-01-07 21:48 UTC (permalink / raw)
  To: users


Hi,

I believe the suggestion here would be to try the new "rx_low_latency=1" option available in DPDK 21.11:

*        Low Rx latency (default 0)
vRAN workloads require a low-latency DPDK interface for the fronthaul connection to the radio. By specifying 1 for the rx_low_latency parameter, each completed Rx descriptor is written immediately to host memory, and the Rx interrupt latency can be reduced to 2 us:
-a 0000:88:00.0,rx_low_latency=1
As a trade-off, this configuration may degrade packet processing performance due to the PCI bandwidth limitation.
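For example, the devarg is passed through the EAL allow-list option when launching testpmd (a hypothetical invocation: the PCI address matches the one above, while the core list and queue counts are placeholders to adapt to your setup):

```shell
# Pass the ice rx_low_latency devarg via the EAL -a (allow) option;
# adjust the PCI address and core list for your system.
dpdk-testpmd -l 0-1 -n 4 -a 0000:88:00.0,rx_low_latency=1 -- -i --rxq=1 --txq=1
```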


If they are using an old DPDK version, they can try to backport the patch below:
http://patchwork.dpdk.org/project/dpdk/patch/20210924093429.12068-1-alvinx.zhang@intel.com/

Thanks




Thread overview: 3+ messages
2021-12-21 12:47 [ICE DPDK 21.11] Latency bump at certain rate Yueyang Pan
2022-01-07 21:48 Anantharam, Arjun
2022-01-11  4:40 ` neeraj.sj
