DPDK patches and discussions
* [dpdk-dev] DPDK delaying individual packets infinitely
@ 2013-12-06 13:22 Dmitry Vyal
  2014-01-02 13:59 ` Thomas Monjalon
  2014-01-02 16:43 ` Stephen Hemminger
  0 siblings, 2 replies; 3+ messages in thread
From: Dmitry Vyal @ 2013-12-06 13:22 UTC (permalink / raw)
  To: dev

Hello list,

For some time I've been writing a custom packet generator coupled with a 
packet receiver using DPDK.

It works great when generating millions of packets, but unexpectedly it 
has trouble generating a few dozen packets or so.

The application consists of two threads: the first sends packets from 
one port and the second receives them on another. These are two ports of 
a four-port 1Gb NIC, detected as Intel Corporation 82576 Gigabit 
Network Connection (rev 01). The ports are connected with a patch cord.

My experiment runs as follows:
After initializing the NICs, both generator and receiver wait for 1 second.
The generator sends N packets, calling rte_eth_tx_burst once per 
individual packet and waiting 100000000 CPU ticks between bursts. 
rte_eth_tx_burst reports that all packets are sent.
The receiver repeatedly calls rte_eth_rx_burst, waiting 50000000 ticks 
whenever the function returns zero. After the generator sends all the 
packets, the receiver keeps polling for several seconds.
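
In rough outline, the two loops look like this (a simplified sketch: 
build_packet, BURST_SIZE and the port/pool variables stand in for my 
actual setup, and initialization and error handling are omitted):

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* build_packet() stands in for my mbuf construction (headers, pkt_len, ...). */
extern struct rte_mbuf *build_packet(struct rte_mempool *pool, unsigned idx);

/* Generator: one rte_eth_tx_burst call per packet, busy-wait in between. */
static void
generator_loop(struct rte_mempool *pool, uint8_t tx_port, unsigned n_pkts)
{
	for (unsigned i = 0; i < n_pkts; i++) {
		struct rte_mbuf *m = build_packet(pool, i);
		uint16_t sent = rte_eth_tx_burst(tx_port, 0, &m, 1);
		printf("sent %5u in queue 0\n", sent); /* always reports 1 */
		uint64_t start = rte_rdtsc();
		while (rte_rdtsc() - start < 100000000ULL)
			; /* inter-packet delay */
	}
}

/* Receiver: poll, back off whenever a poll returns nothing. */
static void
receiver_loop(uint8_t rx_port)
{
	struct rte_mbuf *pkts[BURST_SIZE];

	for (;;) {
		uint16_t n = rte_eth_rx_burst(rx_port, 0, pkts, BURST_SIZE);
		if (n == 0) {
			printf("Z");
			uint64_t start = rte_rdtsc();
			while (rte_rdtsc() - start < 50000000ULL)
				; /* wait before polling again */
		} else {
			printf("R");
			for (uint16_t j = 0; j < n; j++)
				rte_pktmbuf_free(pkts[j]);
		}
	}
}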

I'm observing the following behavior:

If N is small, say 20, then no packets are received. See the logs below: 
the receiver prints Z when it gets zero packets and R when it gets at 
least one packet.

Starting experiment
receiver 0 started main loop
generator 0 sitting on socket 0 is waiting for experiment start
generator 0 started main loop
Zgenerating mbuf for file 0, port 0, addr 0 on socket 0
free_count on pool 0 = 100000000
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 1 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 2 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 3 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 4 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 5 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 6 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 7 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 8 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 9 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 10 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 11 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 12 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 13 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 14 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 15 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 16 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 17 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 18 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 19 on socket 0
sent     1 in queue 0 freed     0
Stopping after reaching 20 packets limit
ZZwaiting for receivers to 
stopZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ********* 
Statistics:
Seconds elapsed: 6.123671


If I make N bigger, say 25, then all of a sudden some packets are 
received, like so:


Starting experiment
receiver 0 started main loop
generator 0 sitting on socket 0 is waiting for experiment start
generator 0 started main loop
generating mbuf for file 0, port 0, addr 0 on socket 0
Zfree_count on pool 0 = 100000000
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 1 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 2 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 3 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 4 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 5 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 6 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 7 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 8 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 9 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 10 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 11 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 12 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 13 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 14 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 15 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 16 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 17 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 18 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 19 on socket 0
sent     1 in queue 0 freed     0
ZZgenerating mbuf for file 0, port 0, addr 20 on socket 0
sent     1 in queue 0 freed     0
Rtotal received packets on queue 0: 21; received: 21; zero_iters: 41
Zgenerating mbuf for file 0, port 0, addr 21 on socket 0
free_count on pool 0 = 6113
sent     1 in queue 0 freed     0
Rtotal received packets on queue 0: 22; received: 1; zero_iters: 1
Zgenerating mbuf for file 0, port 0, addr 22 on socket 0
sent     1 in queue 0 freed     0
Rtotal received packets on queue 0: 23; received: 1; zero_iters: 1
Zgenerating mbuf for file 0, port 0, addr 23 on socket 0
sent     1 in queue 0 freed     0
Rtotal received packets on queue 0: 24; received: 1; zero_iters: 1
Zgenerating mbuf for file 0, port 0, addr 24 on socket 0
sent     1 in queue 0 freed     0
Stopping after reaching 25 packets limit
Rtotal received packets on queue 0: 25; received: 1; zero_iters: 1
Zwaiting for receivers to 
stopZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ********* 
Statistics:
Seconds elapsed: 6.293315


So it looks like DPDK is buffering the first 21 packets and doesn't flush 
the buffer in the first run, even if I wait for several seconds after the 
last packet is sent. In the second run we see it forwards packets one by 
one after the first burst of 21.

I tried DPDK-1.3.1 and DPDK-1.5.1 and observed similar results. Is this 
expected behavior?


* Re: [dpdk-dev] DPDK delaying individual packets infinitely
  2013-12-06 13:22 [dpdk-dev] DPDK delaying individual packets infinitely Dmitry Vyal
@ 2014-01-02 13:59 ` Thomas Monjalon
  2014-01-02 16:43 ` Stephen Hemminger
  1 sibling, 0 replies; 3+ messages in thread
From: Thomas Monjalon @ 2014-01-02 13:59 UTC (permalink / raw)
  To: Dmitry Vyal; +Cc: dev

Hello Dmitry,

06/12/2013 14:22, Dmitry Vyal:
> The application consists of two threads: the first sends packets from
> one port and the second receives them on another. These are two ports of
> a four-port 1Gb NIC, detected as Intel Corporation 82576 Gigabit
> Network Connection (rev 01). The ports are connected with a patch cord.
[...]
> So it looks like DPDK is buffering the first 21 packets and doesn't flush
> the buffer in the first run, even if I wait for several seconds after the
> last packet is sent. In the second run we see it forwards packets one by
> one after the first burst of 21.
> 
> I tried DPDK-1.3.1 and DPDK-1.5.1 and observed similar results. Is this
> expected behavior?

Have you made any progress on this bug?
Is it an igb-specific issue? Can you reproduce it with ixgbe?

Thank you for keeping us informed.
-- 
Thomas


* Re: [dpdk-dev] DPDK delaying individual packets infinitely
  2013-12-06 13:22 [dpdk-dev] DPDK delaying individual packets infinitely Dmitry Vyal
  2014-01-02 13:59 ` Thomas Monjalon
@ 2014-01-02 16:43 ` Stephen Hemminger
  1 sibling, 0 replies; 3+ messages in thread
From: Stephen Hemminger @ 2014-01-02 16:43 UTC (permalink / raw)
  To: Dmitry Vyal; +Cc: dev

On Fri, 06 Dec 2013 17:22:37 +0400
Dmitry Vyal <dmitryvyal@gmail.com> wrote:

> Hello list,
> 
> For some time I've been writing a custom packet generator coupled with a 
> packet receiver using DPDK.
> 
> It works great when generating millions of packets, but unexpectedly it
> has trouble generating a few dozen packets or so.
> 
> The application consists of two threads: the first sends packets from
> one port and the second receives them on another. These are two ports of
> a four-port 1Gb NIC, detected as Intel Corporation 82576 Gigabit
> Network Connection (rev 01). The ports are connected with a patch cord.

I saw something similar with some NICs and backported a fix from
the Linux driver to make sure that threshold was not set incorrectly.
Maybe something similar is needed for your hardware, or is missing in
the version of DPDK you are using.



Subject: [PATCH 4/8] igb: workaround errata with wthresh on 82576

The 82576 has known issues which require the write threshold to be
set to 1.
See:
	http://download.intel.com/design/network/specupdt/82576_SPECUPDATE.pdf
---
 lib/librte_pmd_e1000/em_rxtx.c |    5 +++++
 1 file changed, 5 insertions(+)

--- a/lib/librte_pmd_e1000/em_rxtx.c	2013-04-10 13:59:55.166549303 -0700
+++ b/lib/librte_pmd_e1000/em_rxtx.c	2013-04-10 14:00:44.049915140 -0700
@@ -1270,6 +1270,8 @@ eth_em_tx_queue_setup(struct rte_eth_dev
 	txq->pthresh = tx_conf->tx_thresh.pthresh;
 	txq->hthresh = tx_conf->tx_thresh.hthresh;
 	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	if (txq->wthresh > 0 && hw->mac.type == e1000_82576)
+		txq->wthresh = 1;
 	txq->queue_id = queue_idx;
 	txq->port_id = dev->data->port_id;
 
@@ -1391,6 +1393,9 @@ eth_em_rx_queue_setup(struct rte_eth_dev
 	rxq->pthresh = rx_conf->rx_thresh.pthresh;
 	rxq->hthresh = rx_conf->rx_thresh.hthresh;
 	rxq->wthresh = rx_conf->rx_thresh.wthresh;
+	if (rxq->wthresh > 0 && hw->mac.type == e1000_82576)
+		rxq->wthresh = 1;
+
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
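
If patching the PMD is not convenient, the same effect can probably be 
achieved from the application side by forcing the write-back threshold in 
the TX queue configuration. A minimal sketch, assuming port_id, nb_txd 
and socket_id are already set up, and with common defaults assumed for 
the other two thresholds:

	struct rte_eth_txconf txconf;

	memset(&txconf, 0, sizeof(txconf));
	txconf.tx_thresh.pthresh = 36; /* prefetch threshold (typical default) */
	txconf.tx_thresh.hthresh = 0;  /* host threshold */
	txconf.tx_thresh.wthresh = 1;  /* write-back threshold: 1 per the erratum */

	if (rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txconf) < 0)
		rte_exit(EXIT_FAILURE, "tx queue setup failed\n");

The analogous change would apply to the RX queue configuration.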

