DPDK patches and discussions
From: Dan Kan <dan@nyansa.com>
To: dev@dpdk.org
Subject: [dpdk-dev] Unable to get RSS to work in testpmd and load balancing question
Date: Wed, 8 Jan 2014 15:24:38 -0800
Message-ID: <CA+RRbcoYaAj-beZjLRjtn8TW6u=AWUXhstY0zTX=K586Z4ti-w@mail.gmail.com>

I'm evaluating DPDK using dpdk-1.5.1r1 and have been playing around with the
testpmd sample app. I'm having a hard time getting RSS to work. I have a
2-port Intel 82599 X540-DA2 NIC. I'm running the following command to start
the app:

sudo ./testpmd -c 0x1f -n 2 -- -i --portmask=0x3 --nb-cores=4 --rxq=4 --txq=4

I have a packet generator that sends UDP packets with varying source IPs.
According to testpmd, I'm only receiving packets on port 0's queue 0; packets
are not going into any other queue. I have attached the output from testpmd.


  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
  RX-packets: 1000000        TX-packets: 1000000        TX-dropped: 0

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1000000        RX-dropped: 0             RX-total: 1000000
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 1000000        TX-dropped: 0             TX-total: 1000000
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
  RX-packets: 1000000        RX-dropped: 0             RX-total: 1000000
  TX-packets: 1000000        TX-dropped: 0             TX-total: 1000000
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
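
For reference, my understanding is that with --rxq=4/--txq=4 testpmd sets up
each port for RSS roughly along the lines of the sketch below. This is only an
illustration of the ethdev calls involved, not testpmd's actual code; the
enable_rss() helper and its parameters are made up, and I'm using the current
RTE_ETH_* names, which in dpdk-1.5.1r1 appear without the RTE_ prefix
(ETH_MQ_RX_RSS, ETH_RSS_*):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Hypothetical helper: enable RSS on one port with 4 RX/4 TX queues.
     * mb_pool is assumed to be an existing pktmbuf pool. */
    static int
    enable_rss(uint16_t port_id, struct rte_mempool *mb_pool)
    {
        struct rte_eth_conf conf = {
            .rxmode = {
                .mq_mode = RTE_ETH_MQ_RX_RSS,
            },
            .rx_adv_conf = {
                .rss_conf = {
                    .rss_key = NULL,   /* use the PMD's default hash key */
                    /* hash on IP addresses and UDP ports so flows that
                     * differ only in src IP spread across queues */
                    .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP,
                },
            },
        };
        uint16_t q;
        int ret;

        ret = rte_eth_dev_configure(port_id, 4, 4, &conf);
        if (ret < 0)
            return ret;

        for (q = 0; q < 4; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, 512,
                    rte_eth_dev_socket_id(port_id), NULL, mb_pool);
            if (ret < 0)
                return ret;
            ret = rte_eth_tx_queue_setup(port_id, q, 512,
                    rte_eth_dev_socket_id(port_id), NULL);
            if (ret < 0)
                return ret;
        }
        return rte_eth_dev_start(port_id);
    }

If RSS never actually gets enabled (or the rss_hf bits end up as 0), every
packet hashes to queue 0, which looks a lot like what I'm seeing above.
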

On a separate note, I also find that CPU utilization using 1 forwarding core
for 2 ports seems to be better (in the aggregate sense) than using 2
forwarding cores for 2 ports. Running at 10Gbps line rate with pktlen=400,
the single core's utilization is 40%; with 2 cores, each core's utilization
is 30%, giving an aggregate of 60%.

I have a use case that only involves rxonly packet processing. From my initial
tests, it seems more efficient to have a single core read packets from both
ports and distribute them using an rte_ring than to have each core read from
its own port. The rte_eth_rx operations appear to be much more CPU-intensive
than rte_ring_dequeue operations.
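
To make that concrete, the layout I have in mind is roughly the sketch below.
The names (dist_ring, BURST, the two core functions) are placeholders rather
than code I'm actually running, and the ring calls use the three-argument
burst API as in dpdk-1.5 (later releases add an extra out-parameter):

    #include <rte_ethdev.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    #define BURST 32   /* placeholder burst size */

    /* RX core: drain queue 0 of both ports and push everything onto one
     * ring (dist_ring is assumed to have been created elsewhere with
     * rte_ring_create()). */
    static void
    rx_core(struct rte_ring *dist_ring)
    {
        struct rte_mbuf *pkts[BURST];
        uint16_t port, nb, sent;

        for (;;) {
            for (port = 0; port < 2; port++) {
                nb = rte_eth_rx_burst(port, 0, pkts, BURST);
                if (nb == 0)
                    continue;
                sent = rte_ring_enqueue_burst(dist_ring,
                        (void **)pkts, nb);
                while (sent < nb)          /* drop what didn't fit */
                    rte_pktmbuf_free(pkts[sent++]);
            }
        }
    }

    /* Worker core: dequeue from the ring and process; per packet this is
     * much cheaper than touching the NIC descriptor rings directly. */
    static void
    worker_core(struct rte_ring *dist_ring)
    {
        struct rte_mbuf *pkts[BURST];
        unsigned int nb, i;

        for (;;) {
            nb = rte_ring_dequeue_burst(dist_ring, (void **)pkts, BURST);
            for (i = 0; i < nb; i++) {
                /* ... rx-only processing ... */
                rte_pktmbuf_free(pkts[i]);
            }
        }
    }

With this split, only one core pays the cost of polling the NICs, and the
processing side scales by adding more consumers on the ring (either a
multi-consumer ring or one ring per worker).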

Thanks in advance.

Dan

Thread overview: 7+ messages
2014-01-08 23:24 Dan Kan [this message]
2014-01-09 18:49 ` Daniel Kan
2014-01-09 23:11   ` Thomas Monjalon
2014-01-10  1:02     ` Dan Kan
2014-01-10  2:07 Choi, Sy Jong
2014-01-10  2:35 ` Daniel Kan
2014-01-10 16:04   ` Michael Quicquaro
