DPDK patches and discussions
From: Kamaraj P <pkamaraj@gmail.com>
To: Lance Richardson <lance.richardson@broadcom.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>,
	Ferruh Yigit <ferruh.yigit@intel.com>,  dev <dev@dpdk.org>,
	"Burakov, Anatoly" <anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] DPDK Max Mbuf Allocation
Date: Thu, 16 Sep 2021 10:42:54 +0530
Message-ID: <CAG8PAao3su2eRNb31wWMko8kE1wT2aX2P1pxeisvAYyfv5XBqQ@mail.gmail.com>
In-Reply-To: <CADyeNECnL+xhhRdc0mv7V=B9chMsTG4TNw0SG+Y8arqJkLrVcQ@mail.gmail.com>

Hello All,
Thank you for the clarification. Is there any guideline (readme) that
could be added as part of each release's notes?
This would help us align our DPDK upgrades in our product.

Thanks again for your support


Thanks,
Kamaraj

On Tue, Sep 14, 2021 at 7:35 AM Lance Richardson <
lance.richardson@broadcom.com> wrote:

> On Mon, Sep 13, 2021 at 11:51 AM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > On Mon, 13 Sep 2021 16:43:18 +0100
> > Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >
> > > On 9/13/2021 5:56 AM, Kamaraj P wrote:
> > > > Hello All,
> > > >
> > > > Would like to understand whether there are any guidelines for
> > > > allocating the max number of mbufs per NIC?
> > > > For example, if i have defined as below:
> > > > #define RX_RING_SIZE 1024
> > > > #define TX_RING_SIZE 1024
> > > >
> > > > The maximum number of RX/TX queues can be defined as 8 per NIC. What
> > > > would be the max number of mbufs that can be allocated per NIC?
> > > > Please share if there are any guidelines or any limitations on
> > > > increasing the mbuf count.
> > > >
> > >
> > > Hi Kamaraj,
> > >
> > > The max number of queues and the max number of descriptors per queue
> > > depend on the HW and change from HW to HW.
> > > This information is shared by the PMDs so that the application can take
> > > it into account. For example, the descriptor limitations are provided
> > > by 'rx_desc_lim'/'tx_desc_lim' etc.
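> > >
> > > As a minimal sketch of reading those limits at runtime (assuming
> > > 'port_id' is a valid, already-probed port):
> > >
> > >   #include <stdio.h>
> > >   #include <rte_ethdev.h>
> > >
> > >   struct rte_eth_dev_info dev_info;
> > >   if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
> > >       return -1;  /* hypothetical error handling for this snippet */
> > >   /* per-queue descriptor limits and queue counts reported by the PMD */
> > >   printf("rx descs: min %u max %u align %u, max rx queues %u\n",
> > >          dev_info.rx_desc_lim.nb_min, dev_info.rx_desc_lim.nb_max,
> > >          dev_info.rx_desc_lim.nb_align, dev_info.max_rx_queues);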
> > >
> > > After the descriptor number is defined, testpmd computes the mbuf
> > > count as follows, which can be taken as a sample:
> > >
> > > nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX + RTE_TEST_TX_DESC_MAX +
> > >                    MAX_PKT_BURST + (nb_lcores * mb_mempool_cache);
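> > >
> > > As a rough illustration only (the RTE_TEST_*, MAX_PKT_BURST and
> > > mb_mempool_cache names are testpmd-local constants, used here as
> > > placeholders), creating a pool of that size could look like:
> > >
> > >   #include <stdlib.h>
> > >   #include <rte_eal.h>
> > >   #include <rte_lcore.h>
> > >   #include <rte_mbuf.h>
> > >
> > >   unsigned int nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX +
> > >           RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST +
> > >           (rte_lcore_count() * mb_mempool_cache);
> > >   /* one pool shared by the port's RX/TX queues on this socket */
> > >   struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool",
> > >           nb_mbuf_per_pool, mb_mempool_cache, 0,
> > >           RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
> > >   if (mp == NULL)
> > >       rte_exit(EXIT_FAILURE, "mbuf pool create failed\n");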
> > >
> >
> > It is a little more complicated since some devices (like bnxt) allocate
> > multiple mbufs per packet. Something like:
>
> +1, and it's worth noting that this makes it difficult to run many
> sample applications on the bnxt PMD.
>
> >
> >  nb_mbuf_per_pool = MAX_RX_QUEUES * (RTE_TEST_RX_DESC_MAX * MBUF_PER_RX
> >                                      + MBUF_PER_Q)
> >                 + MAX_TX_QUEUES * RTE_TEST_TX_DESC_MAX * MBUF_PER_TX
> >                 + nb_lcores * MAX_PKT_BURST
> >                 + nb_lcores * mb_mempool_cache
> >                 + nb_lcores * PKTMBUF_POOL_RESERVED;
> >
> > Ended up with
> >    MBUF_PER_RX = 3
>
> For releases up to around 20.11, 3 is the correct value (one mbuf per RX
> ring entry, plus two mbufs in each aggregation ring per RX ring entry).
> Currently the value for MBUF_PER_RX would be 5 (four mbufs in each
> aggregation ring for each RX ring entry). BTW, a future version will
> avoid populating aggregation rings with mbufs when neither LRO nor
> scattered receive is enabled.
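>
> To put rough numbers on that using the ring sizes from the original
> question: with 8 RX queues of 1024 descriptors each, MBUF_PER_RX = 5
> means about 8 * 1024 * 5 = 40960 mbufs for the RX side alone on a
> current release, versus 8 * 1024 * 3 = 24576 up to ~20.11.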
>
> >    MBUF_PER_Q  = 6
>
> Hmm, it's not clear where these would be allocated in the bnxt PMD. It
> seems to me that MBUF_PER_Q is zero for the bnxt PMD.
>
> > and when using jumbo
> >    MBUF_PER_TX = MAX_MTU / MBUF_DATA_SIZE = 2
>
> I don't think this is correct... the bnxt PMD allocates TX descriptor
> rings with the requested number of descriptors from tx_queue_setup();
> that is the maximum number of mbufs that can be present in a TX ring.
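>
> For reference, that ring size is simply what the application passes at
> setup time; a minimal sketch, assuming 'port_id' is valid and the
> default tx conf is acceptable:
>
>   int rc = rte_eth_tx_queue_setup(port_id, 0 /* queue id */,
>                                   TX_RING_SIZE,
>                                   rte_eth_dev_socket_id(port_id),
>                                   NULL /* default tx conf */);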
>
