DPDK patches and discussions
From: Newman Poborsky <newman555p@gmail.com>
To: Matthew Hall <mhall@mhcomputing.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] one worker reading multiple ports
Date: Fri, 21 Nov 2014 15:03:25 +0100	[thread overview]
Message-ID: <CAHW=9PtKGJrh9HAXKSDa-3LUC_56D1oP3jtxS5+-mqwWKqo=4Q@mail.gmail.com> (raw)
In-Reply-To: <20141120215233.GA15551@mhcomputing.net>

So, since a mempool is multi-consumer (by default), if one pool is used to
configure queues on multiple NICs that belong to different sockets, then
mbuf allocation will fail? But if the 2 NICs have the same socket owner,
everything should work fine? And since I'm talking about 2 ports on the
same NIC, they must have the same owner, so RX should work with both RX
queues configured with the same mempool, right? But in my case it doesn't,
so I guess I'm missing something.
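
To be concrete, the pattern I'm describing is roughly this (the sizes,
names and the 2048-byte data room are made-up placeholders, not my exact
code; EAL init, device configure and start are omitted):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NB_MBUF   8192
    #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
    #define NB_RXD    128

    static int
    setup_shared_pool_rx(void)
    {
        /* both ports sit on the same NIC, so they report the same socket */
        int socket_id = rte_eth_dev_socket_id(0);
        struct rte_mempool *pool;

        pool = rte_mempool_create("rx_pool", NB_MBUF, MBUF_SIZE, 32,
                sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
                socket_id, 0);
        if (pool == NULL)
            return -1;

        /* RX queue 0 of both ports points at the same pool */
        if (rte_eth_rx_queue_setup(0, 0, NB_RXD, socket_id, NULL, pool) < 0)
            return -1;
        return rte_eth_rx_queue_setup(1, 0, NB_RXD, socket_id, NULL, pool);
    }

If I instead give each queue its own pool created with identical
parameters, rte_eth_rx_burst() starts returning packets on both ports.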

Any idea how I can troubleshoot why allocation fails with one shared
mempool but works fine when each queue has its own mempool?
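
One thing I will double-check is whether the shared pool is simply too
small once both queues have filled their descriptor rings. Something along
these lines is what I have in mind (assuming rte_mempool_count() is
available here; the defines are the same placeholders as in the snippet
above, plus a cache size and burst size of my own):

    #include <stdio.h>
    #include <rte_mempool.h>

    #define MEMPOOL_CACHE_SIZE 32
    #define MAX_PKT_BURST      32

    static void
    check_pool_sizing(struct rte_mempool *pool, unsigned nb_ports,
                      unsigned nb_rx_queues, unsigned nb_lcores)
    {
        /* rough lower bound: every RX descriptor of every queue keeps one
         * mbuf, plus the per-lcore caches and one burst in flight per lcore */
        unsigned needed = nb_ports * nb_rx_queues * NB_RXD
                + nb_lcores * (MEMPOOL_CACHE_SIZE + MAX_PKT_BURST);

        printf("pool elements: %u, free right now: %u, rough minimum: %u\n",
               NB_MBUF, rte_mempool_count(pool), needed);
    }

Is that accounting roughly right, or am I forgetting another consumer of
mbufs (TX rings, driver refill threshold, ...)?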

Thank you,

Newman

On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall <mhall@mhcomputing.net>
wrote:

> On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > Thank you for your answer.
> >
> > I just realized that the reason the rte_eth_rx_burst() returns 0 is
> > because inside ixgbe_recv_pkts() this fails:
> > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> >
> > Does this mean that every RX queue should have its own rte_mempool?
> > If so, are there any optimal values for: number of RX descriptors,
> > per-queue rte_mempool size, number of hugepages (from what I
> > understand, these 3 are correlated)?
> >
> > If I'm wrong, please explain why.
> >
> > Thanks!
> >
> > BR,
> > Newman
>
> Newman,
>
> Mempools are created per NUMA node (ordinarily this means per processor
> socket if sockets > 1).
>
> When doing Tx / Rx Queue Setup, one should determine the socket which
> owns the given PCI NIC, and try to use memory on that same socket to
> handle traffic for that NIC and Queues.
>
> So, for N cards with Q * N Tx / Rx queues, you only need S mempools,
> where S is the number of NUMA sockets.
>
> Then each of the Q * N queues will use the mempool from the socket
> closest to the card.
>
> Matthew.
>
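
Just to confirm I read this right: one pool per NUMA socket, picked at
queue-setup time by asking which socket owns the port. Roughly like the
sketch below (pools[], MAX_SOCKETS and the sizes are placeholder names of
mine, not anything from the DPDK API; same headers and defines as in my
first snippet, plus stdio.h and stdint.h):

    #define MAX_SOCKETS 8

    static struct rte_mempool *pools[MAX_SOCKETS];

    /* one pool per NUMA socket, created lazily the first time a port on
     * that socket sets up a queue */
    static struct rte_mempool *
    pool_for_port(uint8_t port_id)
    {
        char name[32];
        int s = rte_eth_dev_socket_id(port_id);

        if (s < 0)      /* socket unknown, e.g. a virtual device */
            s = 0;
        if (pools[s] == NULL) {
            snprintf(name, sizeof(name), "mbuf_pool_s%d", s);
            pools[s] = rte_mempool_create(name, NB_MBUF, MBUF_SIZE, 32,
                    sizeof(struct rte_pktmbuf_pool_private),
                    rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
                    s, 0);
        }
        return pools[s];
    }

    /* ...and every RX queue of a port then uses that pool:
     * rte_eth_rx_queue_setup(port_id, q, NB_RXD,
     *         rte_eth_dev_socket_id(port_id), NULL, pool_for_port(port_id));
     */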


Thread overview: 7+ messages
2014-11-20  8:33 Newman Poborsky
2014-11-20  8:56 ` De Lara Guarch, Pablo
2014-11-20 16:10   ` Newman Poborsky
2014-11-20 21:52     ` Matthew Hall
2014-11-21 14:03       ` Newman Poborsky [this message]
2014-11-21 14:44         ` Bruce Richardson
2014-11-21 22:55           ` Newman Poborsky
