DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Ferruh Yigit <ferruh.yigit@intel.com>
Cc: Kamaraj P <pkamaraj@gmail.com>, dev <dev@dpdk.org>,
	"Burakov, Anatoly" <anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] DPDK Max Mbuf Allocation
Date: Mon, 13 Sep 2021 08:51:04 -0700	[thread overview]
Message-ID: <20210913085104.064bcc39@hermes.local> (raw)
In-Reply-To: <2a936b73-9935-6cd9-6d05-780d2f28982f@intel.com>

On Mon, 13 Sep 2021 16:43:18 +0100
Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 9/13/2021 5:56 AM, Kamaraj P wrote:
> > Hello All,
> > 
> > Are there any guidelines for allocating the maximum number of mbufs per NIC?
> > For example, if i have defined as below:
> > #define RX_RING_SIZE 1024
> > #define TX_RING_SIZE 1024
> > 
> > The maximum number of RX/TX queues can be defined as 8 per NIC. What would be
> > the maximum number of mbufs that can be allocated per NIC?
> > Please share if there are any guidelines or limitations on increasing the
> > number of mbufs.
> >   
> 
> Hi Kamaraj,
> 
> The max number of queues and max number of descriptors per queue depend on
> the HW and change from HW to HW.
> This information is shared by the PMDs, and the application needs to take it
> into account. For example, the descriptor limitations are provided by
> 'rx_desc_lim'/'tx_desc_lim' etc.
> 
> After the descriptor number is defined, testpmd computes the mbuf count as
> follows, which can be taken as a sample:
> 
> nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX + RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST +
>                    (nb_lcores * mb_mempool_cache);
> 

It is a little more complicated since some devices (like bnxt) allocate
multiple mbufs per packet. Something like:

 nb_mbuf_per_pool = MAX_RX_QUEUES * (RTE_TEST_RX_DESC_MAX * MBUF_PER_RX + MBUF_PER_Q)
                + MAX_TX_QUEUES * RTE_TEST_TX_DESC_MAX * MBUF_PER_TX
                + nb_lcores * MAX_PKT_BURST
                + nb_lcores * mb_mempool_cache
                + nb_lcores * PKTMBUF_POOL_RESERVED;

Ended up with
   MBUF_PER_RX = 3
   MBUF_PER_Q  = 6
and when using jumbo
   MBUF_PER_TX = MAX_MTU / MBUF_DATA_SIZE = 2

Thread overview: 5+ messages
2021-09-13  4:56 Kamaraj P
2021-09-13 15:43 ` Ferruh Yigit
2021-09-13 15:51   ` Stephen Hemminger [this message]
2021-09-14  2:04     ` Lance Richardson
2021-09-16  5:12       ` Kamaraj P
