DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: Newman Poborsky <newman555p@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] one worker reading multiple ports
Date: Fri, 21 Nov 2014 14:44:30 +0000
Message-ID: <20141121144430.GA9404@bricha3-MOBL3>
In-Reply-To: <CAHW=9PtKGJrh9HAXKSDa-3LUC_56D1oP3jtxS5+-mqwWKqo=4Q@mail.gmail.com>

On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> So, since the mempool is multi-consumer (by default), if one is used to
> configure queues on multiple NICs that have different socket owners, then
> mbuf allocation will fail? But if 2 NICs have the same socket owner,
> everything should work fine? Since I'm talking about 2 ports on the same
> NIC, they must have the same owner, so RX should work with both RX queues
> configured with the same mempool, right? But in my case it doesn't, so I
> guess I'm missing something.

Actually, the mempools will work with NICs on multiple sockets - it's just
that performance is likely to suffer due to QPI usage. The mempools being on
one socket or the other is not going to break your application.
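
For reference, here is a rough sketch of what socket-aware pool creation can
look like - the sizes and the two-socket assumption are illustrative, not
taken from your setup, and it uses the classic rte_mempool_create() plus
rte_pktmbuf_pool_init/rte_pktmbuf_init pattern:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF     8192  /* illustrative only - see the sizing note below */
#define MBUF_SIZE   (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
#define MAX_SOCKETS 2     /* assumed two-socket machine */

static struct rte_mempool *pools[MAX_SOCKETS];

/* Return (creating on first use) the mbuf pool that lives on the same
 * NUMA socket as the given port, so RX refills never cross QPI. */
static struct rte_mempool *
pool_for_port(uint8_t port_id)
{
    int socket = rte_eth_dev_socket_id(port_id);
    char name[RTE_MEMPOOL_NAMESIZE];

    if (socket < 0 || socket >= MAX_SOCKETS)  /* unknown socket: fall back */
        socket = 0;

    if (pools[socket] == NULL) {
        snprintf(name, sizeof(name), "mbuf_pool_s%d", socket);
        pools[socket] = rte_mempool_create(name, NB_MBUF, MBUF_SIZE,
                32,  /* per-lcore cache */
                sizeof(struct rte_pktmbuf_pool_private),
                rte_pktmbuf_pool_init, NULL,
                rte_pktmbuf_init, NULL,
                socket, 0);
    }
    return pools[socket];
}

Each RX queue on a port would then be given that pool, along the lines of
rte_eth_rx_queue_setup(port, q, nb_rxd, rte_eth_dev_socket_id(port),
&rx_conf, pool_for_port(port)).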

> 
> Any idea how can I troubleshoot why allocation fails with one mempool and
> works fine with each queue having its own mempool?

At a guess, I'd say that your mempools just aren't big enough. Try doubling the
size of the mempool in the single-pool case and see if it helps things.
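
As a rough lower bound (the numbers here are only illustrative), a pool shared
by all queues has to cover every RX descriptor of every queue that draws from
it, plus the mbufs your app holds in flight, plus TX descriptors if TX uses
the same pool, plus the per-lcore caches:

/* Illustrative back-of-the-envelope sizing for one pool shared by all
 * queues; plug in your own descriptor counts and lcore count. */
#define NB_PORTS    2
#define NB_RXQ      1     /* RX queues per port */
#define NB_RXD      512   /* RX ring slots, each kept filled with an mbuf */
#define NB_TXD      512   /* TX ring slots (mbufs held until TX cleanup) */
#define BURST_SIZE  32    /* mbufs in flight per queue in the app itself */
#define CACHE_SIZE  32    /* per-lcore mempool cache */
#define NB_LCORES   2

#define NB_MBUF (NB_PORTS * NB_RXQ * (NB_RXD + NB_TXD + BURST_SIZE) + \
                 NB_LCORES * CACHE_SIZE)

If the single shared pool is smaller than something like that, whichever queue
is set up last cannot fill its RX ring, and you end up exactly where you are:
rte_rxmbuf_alloc() returning NULL, while the one-pool-per-queue configuration
works because each pool only has to feed a single ring. The hugepage memory
just has to be large enough to hold NB_MBUF times the mbuf size (plus the
rings themselves), so it follows from the same numbers.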

/Bruce

> 
> Thank you,
> 
> Newman
> 
> On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall <mhall@mhcomputing.net>
> wrote:
> 
> > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > Thank you for your answer.
> > >
> > > I just realized that the reason the rte_eth_rx_burst() returns 0 is
> > > because inside ixgbe_recv_pkts() this fails:
> > > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> > >
> > > Does this mean that every RX queue should have its own rte_mempool?
> > > If so, are there any optimal values for: number of RX descriptors,
> > > per-queue rte_mempool size, number of hugepages (from what I
> > > understand, these 3 are correlated)?
> > >
> > > If I'm wrong, please explain why.
> > >
> > > Thanks!
> > >
> > > BR,
> > > Newman
> >
> > Newman,
> >
> > Mempools are created per NUMA node (ordinarily this means per processor
> > socket if sockets > 1).
> >
> > When doing Tx / Rx queue setup, one should determine the socket which
> > owns the given PCI NIC, and try to use memory on that same socket to
> > handle traffic for that NIC and its queues.
> >
> > So, for N cards with Q * N Tx / Rx queues, you only need S mempools,
> > where S is the number of sockets.
> >
> > Then each of the Q * N queues will use the mempool from the socket
> > closest to the card.
> >
> > Matthew.
> >
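
To tie Matthew's per-socket advice to concrete calls, a rough per-port init
sketch might look like the following. Error paths and TX details are trimmed,
pool_per_socket[] is assumed to hold one pool per NUMA node (created as in
the sketch further up), and passing NULL for the rx/tx conf relies on driver
defaults, which older releases may not support:

#include <rte_ethdev.h>

extern struct rte_mempool *pool_per_socket[];  /* one pool per NUMA node */

static int
init_port(uint8_t port, uint16_t nb_rxq, uint16_t nb_rxd,
          const struct rte_eth_conf *port_conf)
{
    int socket = rte_eth_dev_socket_id(port);
    uint16_t q;
    int ret;

    if (socket < 0)
        socket = 0;

    ret = rte_eth_dev_configure(port, nb_rxq, 1, port_conf);
    if (ret < 0)
        return ret;

    /* every RX queue on this port pulls mbufs from the pool sitting
     * on the same socket as the NIC */
    for (q = 0; q < nb_rxq; q++) {
        ret = rte_eth_rx_queue_setup(port, q, nb_rxd, socket,
                                     NULL, pool_per_socket[socket]);
        if (ret < 0)
            return ret;
    }

    ret = rte_eth_tx_queue_setup(port, 0, 512, socket, NULL);
    if (ret < 0)
        return ret;

    return rte_eth_dev_start(port);
}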

Thread overview: 7+ messages
2014-11-20  8:33 Newman Poborsky
2014-11-20  8:56 ` De Lara Guarch, Pablo
2014-11-20 16:10   ` Newman Poborsky
2014-11-20 21:52     ` Matthew Hall
2014-11-21 14:03       ` Newman Poborsky
2014-11-21 14:44         ` Bruce Richardson [this message]
2014-11-21 22:55           ` Newman Poborsky
