From: Matt Laswell <laswell@infinite.io>
To: users <users@dpdk.org>
Subject: [dpdk-users] Occasional instability in RSS Hashes from X540 NIC
Date: Tue, 2 May 2017 17:36:43 -0500
Message-ID: <CA+GnqArPP0x6-7gyMBTo-EkotOH7kfpaNWtP9sLmEEKrPswg1Q@mail.gmail.com>

Hey Folks,

I'm seeing some strange behavior with regard to the RSS hash values in my
application and was hoping somebody might have some pointers on where to
look. The application uses RSS to divide work among multiple cores, each of
which services a single RX queue. When dealing with a single long-lived TCP
connection, I occasionally see packets going to the wrong core. That is,
almost all of the packets in the connection go to core 5 in this case, but
every once in a while one goes to core 0 instead.
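
For context, the receive path is essentially the standard pattern sketched
below (a simplified sketch, not our exact code; the port/queue handling and
burst size are placeholders): each lcore polls its own RX queue and keys
per-flow state on the hash the NIC writes into mbuf->hash.rss.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Per-core receive loop: each lcore services exactly one RX queue. The NIC
 * chooses the queue from its RSS hash, so every packet of a given TCP
 * connection should land on the same core with the same mbuf->hash.rss. */
static void
rx_loop(uint8_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint16_t nb, i;

    for (;;) {
        nb = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
        for (i = 0; i < nb; i++) {
            /* hash.rss is valid when PKT_RX_RSS_HASH is set in ol_flags */
            uint32_t rss = bufs[i]->hash.rss;

            (void)rss;                 /* flow lookup keyed on rss goes here */
            rte_pktmbuf_free(bufs[i]); /* placeholder for real processing */
        }
    }
}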

Upon further investigation, I find that the problem packets always have the
RSS hash value in the mbuf set to zero.  They are therefore put in queue
zero, where they are read by core zero.  Other packets from the same
connection that occur immediately before and after the packet in question
have the correct hash value and therefore go to a different core.  This
plays havoc with my tracking of the TCP stream.
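
In case anyone wants to reproduce the observation, a check along these
lines is enough to flag the offending packets (a rough sketch; the logging
is purely illustrative):

#include <rte_log.h>
#include <rte_mbuf.h>

/* Flag any received packet whose reported RSS hash is zero or missing;
 * in our case its neighbors in the same TCP connection hash correctly. */
static void
check_rss(const struct rte_mbuf *m, uint16_t queue_id)
{
    if (!(m->ol_flags & PKT_RX_RSS_HASH))
        RTE_LOG(WARNING, USER1,
            "queue %u: packet arrived with no RSS hash reported\n",
            (unsigned int)queue_id);
    else if (m->hash.rss == 0)
        RTE_LOG(WARNING, USER1,
            "queue %u: packet arrived with an RSS hash of zero\n",
            (unsigned int)queue_id);
}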

A few details:

   - Using an Intel X540-AT2 NIC and the igb_uio driver
   - DPDK 16.04
   - A particular packet in our workflow always encounters this problem.
   - Retransmissions of the packet in question also encounter the problem
   - The packet is IPv4, with a header length of 20 bytes (so no options)
   and no fragmentation.
   - The only differences I can see in the IP header between packets that
   get the right hash value and those that get the wrong one are in the IP ID,
   total length, and checksum fields.
   - Using ETH_RSS_IPV4
   - We fill the key with the byte pattern 0x6d5a repeated, to get symmetric
   hashing of both sides of the connection (see the configuration sketch
   after this list)
   - We only configure RSS information at boot; things like the key or
   header fields are not being changed dynamically
   - Traffic load is light when the problem occurs
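
For completeness, the RSS setup amounts to the sketch below (simplified,
with error handling and queue setup omitted; field names as in DPDK 16.04).
The key is just the two bytes 0x6d, 0x5a repeated across all 40 bytes,
which makes the Toeplitz hash symmetric in source and destination:

#include <string.h>
#include <rte_ethdev.h>

#define RSS_KEY_LEN 40

static uint8_t sym_rss_key[RSS_KEY_LEN];

static int
configure_port_rss(uint8_t port_id, uint16_t nb_rx_queues)
{
    struct rte_eth_conf port_conf;
    unsigned int i;

    /* 0x6d5a repeated: both directions of a connection hash identically */
    for (i = 0; i < RSS_KEY_LEN; i += 2) {
        sym_rss_key[i] = 0x6d;
        sym_rss_key[i + 1] = 0x5a;
    }

    memset(&port_conf, 0, sizeof(port_conf));
    port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
    port_conf.rx_adv_conf.rss_conf.rss_key = sym_rss_key;
    port_conf.rx_adv_conf.rss_conf.rss_key_len = RSS_KEY_LEN;
    port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4;

    /* one TX queue here just to keep the sketch short */
    return rte_eth_dev_configure(port_id, nb_rx_queues, 1, &port_conf);
}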

Is anybody aware of an erratum, either in the NIC or in the PMD's
configuration of it, that might explain something like this? Failing that,
if you ran into this sort of behavior, how would you approach finding the
reason for the error? Every failure mode I can think of would tend to
affect all of the packets in the connection consistently, even if
incorrectly.
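
One thing I plan to try is recomputing the Toeplitz hash in software and
comparing it with what the NIC reported, to at least separate "the NIC
hashed this packet differently" from "the hash never made it into the
descriptor". A rough sketch using the rte_thash.h helpers (softrss_ipv4()
is just an illustrative name, and I'm assuming that with ETH_RSS_IPV4 only
the source and destination addresses are hashed):

#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_thash.h>

/* Recompute the RSS hash for an IPv4 packet in software, using the same
 * 40-byte key the port was configured with. */
static uint32_t
softrss_ipv4(const struct ipv4_hdr *ip, const uint8_t *rss_key)
{
    union rte_thash_tuple tuple;

    /* the software hash helpers expect the addresses in host byte order */
    tuple.v4.src_addr = rte_be_to_cpu_32(ip->src_addr);
    tuple.v4.dst_addr = rte_be_to_cpu_32(ip->dst_addr);

    return rte_softrss((uint32_t *)&tuple, RTE_THASH_V4_L3_LEN, rss_key);
}

Comparing that value with mbuf->hash.rss for the problem packet should show
whether the hash is being computed incorrectly or simply not delivered.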

Thanks in advance for any ideas.

--
Matt Laswell
laswell@infinite.io
