From: Mark Mason <mason+dpdk@steelypip.org>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] Mbuf pool/ring size question
Date: Thu, 26 Jul 2018 16:09:20 -0400
Message-ID: <20180726200920.GA18059@postdiluvian.org>

Hi all,

I've got a question about mbuf pool and ring sizes - DPDK 17.02 PMD.

I've got a pipelined application running with RSS on a Cavium CN83XX:
40GE, 4 RSS queues wide and a pipeline 3 deep, with isolcpus so that
only DPDK runs on each of the 12 worker cores.  There are two RTE
SP/SC rings per RSS queue for communication between the pipeline
stages; the rings are 1024 deep, and the mbuf pool holds 16K-1 mbufs
with a per-lcore cache of 512.
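For concreteness, the setup is roughly the following (a minimal
sketch, not my actual code - the names, the zero priv size, and the
default data room size are placeholders; the counts are the ones
above):

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ring.h>

#define NB_MBUF       (16384 - 1)  /* 16K-1 mbufs, as above */
#define MBUF_CACHE    512          /* per-lcore mempool cache */
#define RING_DEPTH    1024         /* depth of each inter-stage ring */
#define NB_RSS_QUEUES 4
#define RINGS_PER_Q   2

static struct rte_mempool *pool;
static struct rte_ring *stage_ring[NB_RSS_QUEUES][RINGS_PER_Q];

static int setup_pipeline(int socket_id)
{
        int q, r;

        pool = rte_pktmbuf_pool_create("pktmbuf_pool", NB_MBUF, MBUF_CACHE,
                                       0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                       socket_id);
        if (pool == NULL)
                return -1;

        for (q = 0; q < NB_RSS_QUEUES; q++) {
                for (r = 0; r < RINGS_PER_Q; r++) {
                        char name[32];

                        snprintf(name, sizeof(name), "stage_%d_%d", q, r);
                        /* one producer stage, one consumer stage per ring */
                        stage_ring[q][r] = rte_ring_create(name, RING_DEPTH,
                                        socket_id,
                                        RING_F_SP_ENQ | RING_F_SC_DEQ);
                        if (stage_ring[q][r] == NULL)
                                return -1;
                }
        }
        return 0;
}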

Performance is generally good - 40G in and 40G out with 1M flows of
512-byte packets - except for intermittent drops on the order of a few
dozen to a few hundred packets/second.  I did some timing measurements
and found that a packet can sometimes take much longer to get through
the pipeline - two to three orders of magnitude longer - despite being
identical (except for the destination address) and taking an
identical(ish) code path.

I tried measuring where the extra time was going, but pretty much
everything I tried perturbed the system, so I wasn't easily able to
get a clear answer.  One of my suspicions is the per-lcore mbuf cache
flush/fill, since RX and TX are being done by different cores.  Is
there a more efficient way to manage the mbuf pool in this case than
rte_pktmbuf_pool_create?  Some cores don't allocate or free mbufs, so
I'm also curious whether I'm losing mbufs to the caches on those cores.
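For reference, this is the kind of accounting I'm trying to reason
about (a sketch - "mp" would be the pool from the sketch above;
rte_mempool_avail_count() counts the common pool plus all per-lcore
caches, and rte_mempool_dump() prints the per-lcore cache counts):

#include <stdio.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static void dump_mbuf_accounting(struct rte_mempool *mp)
{
        unsigned lcore;

        printf("avail=%u in_use=%u\n",
               rte_mempool_avail_count(mp),   /* common pool + caches */
               rte_mempool_in_use_count(mp));

        RTE_LCORE_FOREACH(lcore) {
                struct rte_mempool_cache *c =
                        rte_mempool_default_cache(mp, lcore);

                if (c != NULL)
                        printf("lcore %u cache holds %u mbufs\n",
                               lcore, c->len);
        }

        /* or dump everything, including per-lcore cache counts */
        rte_mempool_dump(stdout, mp);
}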

Since I have memory to burn, I figured I could absorb any glitches by
increasing the RX/TX descriptor counts, mbuf pool, and ring sizes,
allowing more packets to be buffered during the glitches.  This didn't
help, which I guess makes sense if my issue is lock contention on the
mbuf cache, which I can't make larger.  Almost all of the DPDK
examples and applications I could find use roughly the same parameters
- 128-512 buffer descriptors, a 4-16K mbuf pool, 1K ring sizes, etc.
It seems there are diminishing returns for increasing much beyond
these values - why is that?
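For reference, these are the knobs I mean (the 2048 values and the
port/queue/socket arguments are placeholders, not what I actually ran;
the mbuf pool then has to cover the descriptors on every queue, plus
packets in flight in the pipeline rings, plus up to 512 mbufs per
lcore cache):

#include <rte_ethdev.h>
#include <rte_mempool.h>

#define NB_RX_DESC 2048   /* vs. the usual 128-512 in the examples */
#define NB_TX_DESC 2048

static int setup_port_queue(uint8_t port, uint16_t q, int socket_id,
                            struct rte_mempool *mp)
{
        int ret;

        ret = rte_eth_rx_queue_setup(port, q, NB_RX_DESC, socket_id,
                                     NULL /* default rxconf */, mp);
        if (ret < 0)
                return ret;

        return rte_eth_tx_queue_setup(port, q, NB_TX_DESC, socket_id,
                                      NULL /* default txconf */);
}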
