From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
To: Stephen Hemminger <stephen@networkplumber.org>,
Cliff Burdick <shaklee3@gmail.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: RE: How to increase mbuf size in dpdk version 17.11
Date: Wed, 2 Mar 2022 01:37:01 +0000
Message-ID: <SJ0PR01MB6399C4B7293DCF4431EB9D4C8F039@SJ0PR01MB6399.prod.exchangelabs.com>
In-Reply-To: <20220301144602.73c8ff95@hermes.local>
Here is the output from rte_mempool_dump() after creating the mbuf pool with mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0):
nb_mbuf_per_pool = 32768
mb_size = 16640
16512 * 32768 = 541,065,216 bytes (516 MiB of data room alone)
mempool <mbuf_pool_socket_0>@0x17f811400
flags=10
pool=0x17f791180
iova=0x80fe11400
nb_mem_chunks=1
size=32768
populated_size=32768
header_size=64
elt_size=16640
trailer_size=0
total_obj_size=16704
private_data_size=64
avg bytes/object=16704.000000
internal cache infos:
cache_size=250
cache_count[0]=0
...
cache_count[126]=0
cache_count[127]=0
total_cache_count=0
common_pool_count=32768
no statistics available
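For reference, a minimal sketch of the kind of pool-creation call that produces a layout like the dump above (DPDK 17.11 API; the sizes and cache size are taken from the output here, but the wrapper name and error handling are illustrative assumptions, not the application's actual code):

#include <stdio.h>
#include <stdlib.h>
#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch only: 16512 = 16K of packet data + 128 B of headroom. */
#define MBUF_SEG_SIZE   16512
#define NB_MBUF         32768
#define MBUF_CACHE_SIZE 250

static struct rte_mempool *
create_big_mbuf_pool(int socket_id)
{
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create("mbuf_pool_socket_0", NB_MBUF,
                                     MBUF_CACHE_SIZE,
                                     0,              /* priv size per mbuf */
                                     MBUF_SEG_SIZE,  /* data room size */
                                     socket_id);
        if (mp == NULL)
                rte_exit(EXIT_FAILURE, "cannot create mbuf pool: %s\n",
                         rte_strerror(rte_errno));

        rte_mempool_dump(stdout, mp);
        return mp;
}

With priv_size = 0, elt_size comes out as sizeof(struct rte_mbuf) + priv_size + data room = 128 + 0 + 16512 = 16640, which matches the elt_size in the dump above.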
-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Tuesday, March 1, 2022 5:46 PM
To: Cliff Burdick <shaklee3@gmail.com>
Cc: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
Subject: Re: How to increase mbuf size in dpdk version 17.11
On Tue, 1 Mar 2022 13:37:07 -0800
Cliff Burdick <shaklee3@gmail.com> wrote:
> Can you verify how many buffers you're allocating? I don't see how
> many you're allocating in this thread.
>
> On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed <Ed.Lombardo@netscout.com>
> wrote:
>
> > Hi Stephen,
> > The VM is configured to have 32 GB of memory.
> > Will dpdk consume the 2GB of hugepage memory for the mbufs?
> > I don't mind having fewer mbufs with an mbuf size of 16K vs. the
> > original mbuf size of 2K.
> >
> > Thanks,
> > Ed
> >
> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Tuesday, March 1, 2022 2:57 PM
> > To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> > Cc: users@dpdk.org
> > Subject: Re: How to increase mbuf size in dpdk version 17.11
> >
> >
> > On Tue, 1 Mar 2022 18:34:22 +0000
> > "Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:
> >
> > > Hi,
> > > I have an application built with dpdk 17.11.
> > > During initialization I want to change the mbuf size from 2K to 16K.
> > > I want to receive packet sizes of 8K or more in one mbuf.
> > >
> > > The VM running the application is configured to have 2G hugepages.
> > >
> > > I tried many things and I get an error when a packet arrives.
> > >
> > > I read online that there is a #define DEFAULT_MBUF_DATA_SIZE, which I
> > > changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > > The call to rte_pktmbuf_pool_create() returns success with my changes.
> > > From rte_eth_stats, "rx_nombuf" (the total number of Rx mbuf
> > > allocation failures) increments each time a packet arrives.
> > >
> > > Is there any reference document explaining what causes this error?
> > > Is there a user guide I should follow to make the mbuf size change,
> > > starting with the hugepage value?
> > >
> > > Thanks,
> > > Ed
> >
> > Did you check that you have enough memory in the system for the
> > larger footprint?
> > Using 16K per mbuf is going to cause lots of memory to be consumed.
A little math; you can fill in your own values.
Assuming you want 16K of data, you need at a minimum [1]:
num_rxq := total number of receive queues
num_rxd := number of receive descriptors per receive queue
num_txq := total number of transmit queues (assume all can be full)
num_txd := number of transmit descriptors
num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size
Assuming you are using code copied from an example like l3fwd,
with 4 Rx queues and 4 Tx queues (1024 descriptors each), 4 cores, and a burst size of 32:
num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320
Each mbuf element requires [2]
elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
= 128 + 128 + 16K = 16640
obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL)
= 16832
So the total pool footprint is
num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes ≈ 134 MiB
[1] Some devices, like bnxt, need multiple buffers per packet.
[2] Often applications want additional space per mbuf for meta-data.
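The same arithmetic as a small self-contained sketch (the queue, descriptor, core and burst counts are just the example values above, not anyone's measured configuration):

#include <stdio.h>
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

int main(void)
{
        /* Example values from the formula above; substitute your own. */
        unsigned int num_rxq = 4, num_rxd = 1024;
        unsigned int num_txq = 4, num_txd = 1024;
        unsigned int num_cores = 4, burst_size = 32;
        unsigned int num_mbufs = num_rxq * num_rxd + num_txq * num_txd +
                                 num_cores * burst_size;        /* 8320 */

        /* Per-element size: mbuf header + headroom + 16K of data room. */
        uint32_t elt_size = sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM +
                            16 * 1024;
        uint32_t obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL);

        printf("pool footprint: %u mbufs * %u bytes/object = %ju bytes\n",
               num_mbufs, (unsigned int)obj_size,
               (uintmax_t)num_mbufs * obj_size);
        return 0;
}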