DPDK usage discussions
* [dpdk-users] dpdk packet loss,delay, retransmission
@ 2019-07-21  9:41 Anupama Laxmi
  2019-07-22 16:17 ` Stephen Hemminger
  0 siblings, 1 reply; 2+ messages in thread
From: Anupama Laxmi @ 2019-07-21  9:41 UTC (permalink / raw)
  To: users

I see delays in TCP transfers while copying large files with SSH/SCP. The
delay is not seen with files up to 35 MB; beyond 35 MB there are TCP
out-of-order segments, retransmissions, and packet loss. An scp of a
762835531-byte file that previously took 8 seconds now takes ~4 minutes
after the DPDK upgrade. I suspect that the small RX and TX ring sizes on
the NICs bound to DPDK are causing packet drops.

   1. Increased the RX and TX ring sizes from

      RTE_TEST_RX_DESC_DEFAULT 128 RTE_TEST_TX_DESC_DEFAULT 512

   to

      RTE_TEST_RX_DESC_DEFAULT 4096 RTE_TEST_TX_DESC_DEFAULT 4096

   This gave a small improvement: the scp time for the same 762835531-byte
   file dropped to 2 minutes 30 seconds.
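For reference, these ring sizes take effect at queue-setup time. A minimal
sketch of how the larger rings would be requested, assuming port 0 with a
single queue pair, an already-configured device, and an existing mempool
named mbuf_pool (the function name and mempool name here are illustrative,
not from my application):

```c
#include <rte_ethdev.h>

/* Sketch only: assumes rte_eal_init() and rte_eth_dev_configure() have
 * already run, and that mbuf_pool is an existing rte_mempool. */
static int setup_queues(uint16_t port, struct rte_mempool *mbuf_pool)
{
	uint16_t nb_rxd = 4096, nb_txd = 4096;
	int ret;

	/* Clamp the requested sizes to what this NIC actually supports;
	 * the PMD may silently cap them otherwise. */
	ret = rte_eth_dev_adjust_nb_rx_tx_desc(port, &nb_rxd, &nb_txd);
	if (ret != 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port, 0, nb_rxd,
				     rte_eth_dev_socket_id(port),
				     NULL, mbuf_pool);
	if (ret != 0)
		return ret;

	return rte_eth_tx_queue_setup(port, 0, nb_txd,
				      rte_eth_dev_socket_id(port), NULL);
}
```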

   2. Tried to rate-limit TX using the rte_eth_set_queue_rate_limit API.

   This did not help much.
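For completeness, a hedged sketch of how I understand that call: the rate
argument is in Mb/s, and many PMDs do not implement it at all, returning
-ENOTSUP — which by itself could explain seeing no effect. The 1000 Mb/s
value and the helper name are just examples:

```c
#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_errno.h>

/* Sketch: cap TX queue 0 of the given port at 1000 Mb/s. Check the return
 * value rather than assuming the limit took effect. */
static int limit_tx(uint16_t port)
{
	int ret = rte_eth_set_queue_rate_limit(port, 0, 1000);

	if (ret == -ENOTSUP)
		printf("PMD on port %u does not support queue rate limiting\n",
		       port);
	return ret;
}
```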

Please suggest how to configure the mbuf pool and queue sizes to reduce the
delay and packet loss in these transfers.


* Re: [dpdk-users] dpdk packet loss,delay, retransmission
  2019-07-21  9:41 [dpdk-users] dpdk packet loss,delay, retransmission Anupama Laxmi
@ 2019-07-22 16:17 ` Stephen Hemminger
  0 siblings, 0 replies; 2+ messages in thread
From: Stephen Hemminger @ 2019-07-22 16:17 UTC (permalink / raw)
  To: Anupama Laxmi; +Cc: users

On Sun, 21 Jul 2019 15:11:34 +0530
Anupama Laxmi <anupamalaxmi4@gmail.com> wrote:

> I see delays in TCP transfers while copying large files with SSH/SCP. The
> delay is not seen with files up to 35 MB; beyond 35 MB there are TCP
> out-of-order segments, retransmissions, and packet loss. An scp of a
> 762835531-byte file that previously took 8 seconds now takes ~4 minutes
> after the DPDK upgrade. I suspect that the small RX and TX ring sizes on
> the NICs bound to DPDK are causing packet drops.
> 
>    1. Increased the RX and TX ring sizes from
> 
>       RTE_TEST_RX_DESC_DEFAULT 128 RTE_TEST_TX_DESC_DEFAULT 512
> 
>    to
> 
>       RTE_TEST_RX_DESC_DEFAULT 4096 RTE_TEST_TX_DESC_DEFAULT 4096
> 
>    This gave a small improvement: the scp time for the same 762835531-byte
>    file dropped to 2 minutes 30 seconds.
> 
>    2. Tried to rate-limit TX using the rte_eth_set_queue_rate_limit API.
> 
>    This did not help much.
> 
> Please suggest how to configure the mbuf pool and queue sizes to reduce the
> delay and packet loss in these transfers.

Welcome to the world of buffer bloat.

The root cause is that large receive and transmit rings give worse
performance in the real world; the optimum queue size is smaller than you
would expect. The DPDK examples are all badly tuned for real-life traffic
and encourage bufferbloat: the large default values exist to make UDP
packet benchmarks look fast.

What is your packet size and data rate?
You should aim for < 1ms of buffering.
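To make that concrete, the worst-case time a full ring represents on the
wire is ring_size × frame_bits ÷ link_rate. A small standalone helper (the
10 Gbit/s link speed and 1500-byte frames below are illustrative
assumptions, not figures from this thread):

```c
#include <stdio.h>

/* Worst-case buffering latency of a full descriptor ring, in ms.
 * E.g. 4096 descriptors of 1500-byte frames at 10 Gbit/s is ~4.9 ms,
 * while 512 descriptors is ~0.61 ms. */
static double ring_latency_ms(unsigned ring_size, unsigned frame_bytes,
			      double link_bps)
{
	return ring_size * frame_bytes * 8.0 / link_bps * 1000.0;
}
```

At these assumed rates, the 4096-descriptor ring holds roughly 5 ms of
data, five times over the 1 ms target; about 830 descriptors would be the
ceiling for 1 ms on a 10 Gbit/s link with full-size frames.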

DPDK could do better by implementing something like Linux's Byte Queue
Limits and/or CoDel, but that requires a software queueing layer on top of
the hardware rings.
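As a rough illustration of the byte-limit idea (plain C, not DPDK code, and
all names here are invented for the sketch): track the bytes outstanding in
the hardware ring rather than the packet count, refuse new enqueues past a
byte budget sized to about 1 ms at line rate, and credit bytes back as TX
completions are reaped.

```c
#include <stdbool.h>

/* Toy byte-limit state: bounds buffering *time* regardless of frame size,
 * which a fixed descriptor count cannot do. */
struct byte_limit {
	unsigned long inflight;	/* bytes currently queued to hardware */
	unsigned long limit;	/* budget, e.g. 1 ms worth at line rate */
};

/* Try to account a packet; returns false to signal back-pressure
 * instead of letting the queue grow. */
static bool bl_try_enqueue(struct byte_limit *bl, unsigned pkt_bytes)
{
	if (bl->inflight + pkt_bytes > bl->limit)
		return false;
	bl->inflight += pkt_bytes;
	return true;
}

/* Called when a TX completion for pkt_bytes is reaped. */
static void bl_complete(struct byte_limit *bl, unsigned pkt_bytes)
{
	bl->inflight -= pkt_bytes;
}
```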


