From: "Benson, Bryan" <bmbenson@amazon.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] RX checksum offloading
Date: Thu, 7 Nov 2013 03:09:15 +0000
Message-ID: <A029A4295D154649BCC3E22A692B26022DD6B7@ex10-mbx-36002.ant.amazon.com>
In-Reply-To: <A029A4295D154649BCC3E22A692B26022DD412@ex10-mbx-36002.ant.amazon.com>

All,
The receive checksum issue appears to be caused by using an RX_FREE_THRESH value of 32 or larger, as validated with the testpmd application.
I used two different packet types: packets with a bad IP checksum sent to port 0, and packets with a bad TCP checksum sent to port 1. The way the packets were sent did not vary between the tests.
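
For context, the Bad-ipcsum and Bad-l4csum counters in the transcripts below are driven by the offload flags the PMD sets on each received mbuf. Here is a minimal sketch of that check, assuming the DPDK 1.x flag names (PKT_RX_IP_CKSUM_BAD, PKT_RX_L4_CKSUM_BAD); the helper name, port/queue arguments and burst size of 16 are illustrative:

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Count RX checksum errors the way the csum forwarding engine reports
 * them: by testing the offload flags the PMD sets on each mbuf. */
static void
count_bad_csums(uint8_t port, uint16_t queue,
                uint64_t *bad_ip, uint64_t *bad_l4)
{
	struct rte_mbuf *pkts[16];
	uint16_t nb_rx, i;

	nb_rx = rte_eth_rx_burst(port, queue, pkts, 16);
	for (i = 0; i < nb_rx; i++) {
		if (pkts[i]->ol_flags & PKT_RX_IP_CKSUM_BAD)
			(*bad_ip)++;
		if (pkts[i]->ol_flags & PKT_RX_L4_CKSUM_BAD)
			(*bad_l4)++;
		rte_pktmbuf_free(pkts[i]);
	}
}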

Below is a brief summary of the testing; the additional gory detail is attached.

RX free threshold values of 0, 8, 16, 24, 28, 30 and 31 are okay.

Values of 32 and above are not, as tested with 32, 64 and 128.

I will continue researching this. The version I used to test is vanilla DPDK 1.3, but with the RSC disable patch applied (which helps when there are many ACKs).
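
For reference, the knob in question is rte_eth_rxconf.rx_free_thresh, which testpmd's --rxfreet option feeds into rte_eth_rx_queue_setup(). Notably, 32 matches the minimum rx_free_thresh at which the ixgbe PMD can satisfy the preconditions for its bulk-allocation receive path, so a switch of receive function at that boundary is a plausible suspect. A sketch of that setup using the DPDK 1.x API; the pthresh/hthresh/wthresh values mirror the runs below, while the port, queue, socket and mempool arguments are illustrative:

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* --rxfreet ends up here: values of 0..31 behaved correctly in the
 * tests above, 32 and above lost the checksum error flags. */
static int
setup_rx_queue(uint8_t port, unsigned int socket_id,
               struct rte_mempool *pool, uint16_t rx_free_thresh)
{
	struct rte_eth_rxconf rx_conf = {
		.rx_thresh = { .pthresh = 8, .hthresh = 8, .wthresh = 4 },
		.rx_free_thresh = rx_free_thresh,
	};

	/* 1024 descriptors, matching --rxd=1024 in the runs below */
	return rte_eth_rx_queue_setup(port, 0, 1024, socket_id,
	                              &rx_conf, pool);
}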

Thanks,
Bryan Benson

 [bmbenson]~/1.3.1.1/DPDK% sudo ./x86_64-default-linuxapp-gcc/app/testpmd -c 0xFF00FF00 -n 4 -b 0000:06:00.0 -- --portmask=0x3 --nb-cores=2 --enable-rx-cksum --disable-hw-vlan --disable-rss --crc-strip --rxd=1024 --txd=1024 -i             
... <text removed>
testpmd> set fwd csum
Set csum packet forwarding mode
testpmd> start
  csum packet forwarding - CRC stripping enabled - packets/burst=16
  nb forwarding cores=2 - nb forwarding ports=2
  RX queues=1 - RX desc=1024 - RX free threshold=0
  RX threshold registers: pthresh=8 hthresh=8 wthresh=4
  TX queues=1 - TX desc=1024 - TX free threshold=0
  TX threshold registers: pthresh=36 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 490511         RX-dropped: 0             RX-total: 490511
  Bad-ipcsum: 490496         Bad-l4csum: 0              
  TX-packets: 488720         TX-dropped: 0             TX-total: 488720
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 488804         RX-dropped: 0             RX-total: 488804
  Bad-ipcsum: 0              Bad-l4csum: 488704         
  TX-packets: 490511         TX-dropped: 0             TX-total: 490511
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 979315         RX-dropped: 0             RX-total: 979315
  TX-packets: 979231         TX-dropped: 0             TX-total: 979231
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

[bmbenson]~/1.3.1.1/DPDK% sudo ./x86_64-default-linuxapp-gcc/app/testpmd -c 0xFF00FF00 -n 4 -b 0000:06:00.0 -- --portmask=0x3 --nb-cores=2 --enable-rx-cksum --disable-hw-vlan --disable-rss --crc-strip --rxd=1024 --txd=1024 --rxfreet=32 -i
... <text removed>
testpmd> set fwd csum
Set csum packet forwarding mode
testpmd> start
  csum packet forwarding - CRC stripping enabled - packets/burst=16
  nb forwarding cores=2 - nb forwarding ports=2
  RX queues=1 - RX desc=1024 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=4
  TX queues=1 - TX desc=1024 - TX free threshold=0
  TX threshold registers: pthresh=36 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 378894         RX-dropped: 0             RX-total: 378894
  Bad-ipcsum: 0              Bad-l4csum: 0              
  TX-packets: 381197         TX-dropped: 0             TX-total: 381197
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 381197         RX-dropped: 0             RX-total: 381197
  Bad-ipcsum: 0              Bad-l4csum: 0              
  TX-packets: 378894         TX-dropped: 0             TX-total: 378894
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 760091         RX-dropped: 0             RX-total: 760091
  TX-packets: 760091         TX-dropped: 0             TX-total: 760091
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

