DPDK usage discussions
From: Cliff Burdick <shaklee3@gmail.com>
To: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: How to increase mbuf size in dpdk version 17.11
Date: Tue, 1 Mar 2022 19:56:39 -0800
Message-ID: <CA+Gp1nZueA3uM5xVFU8+mazSHSOhB8ZbKo04xZgP_s-2O9nB+A@mail.gmail.com>
In-Reply-To: <SJ0PR01MB6399BFC8E118356D1FF30E578F039@SJ0PR01MB6399.prod.exchangelabs.com>


That's showing you have 0 hugepages free. Maybe they weren't passed through
to the VM properly?

On Tue, Mar 1, 2022 at 7:50 PM Lombardo, Ed <Ed.Lombardo@netscout.com>
wrote:

> [root@vSTREAM_632 ~]# cat /proc/meminfo
> MemTotal:       32778372 kB
> MemFree:        15724124 kB
> MemAvailable:   15897392 kB
> Buffers:           18384 kB
> Cached:           526768 kB
> SwapCached:            0 kB
> Active:           355140 kB
> Inactive:         173360 kB
> Active(anon):      62472 kB
> Inactive(anon):    12484 kB
> Active(file):     292668 kB
> Inactive(file):   160876 kB
> Unevictable:    13998696 kB
> Mlocked:        13998696 kB
> SwapTotal:       3906556 kB
> SwapFree:        3906556 kB
> Dirty:                76 kB
> Writeback:             0 kB
> AnonPages:      13986156 kB
> Mapped:            95500 kB
> Shmem:             16864 kB
> Slab:             121952 kB
> SReclaimable:      71128 kB
> SUnreclaim:        50824 kB
> KernelStack:        4608 kB
> PageTables:        31524 kB
> NFS_Unstable:          0 kB
> Bounce:                0 kB
> WritebackTmp:          0 kB
> CommitLimit:    19247164 kB
> Committed_AS:   14170424 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:      212012 kB
> VmallocChunk:   34342301692 kB
> Percpu:             2816 kB
> HardwareCorrupted:     0 kB
> AnonHugePages:  13228032 kB
> CmaTotal:              0 kB
> CmaFree:               0 kB
> HugePages_Total:    1024
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> DirectMap4k:      104320 kB
> DirectMap2M:    33449984 kB
>
> From: Cliff Burdick <shaklee3@gmail.com>
> Sent: Tuesday, March 1, 2022 10:45 PM
> To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
> Can you paste the output of "cat /proc/meminfo"?
>
>
>
> On Tue, Mar 1, 2022 at 5:37 PM Lombardo, Ed <Ed.Lombardo@netscout.com>
> wrote:
>
> Here is the output from rte_mempool_dump() after creating the mbuf pool with
> mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0):
>  nb_mbuf_per_pool = 32768
>  mb_size = 16640
>  16512 * 32768 = 541,065,216
>
> mempool <mbuf_pool_socket_0>@0x17f811400
>   flags=10
>   pool=0x17f791180
>   iova=0x80fe11400
>   nb_mem_chunks=1
>   size=32768
>   populated_size=32768
>   header_size=64
>   elt_size=16640
>   trailer_size=0
>   total_obj_size=16704
>   private_data_size=64
>   avg bytes/object=16704.000000
>   internal cache infos:
>     cache_size=250
>     cache_count[0]=0
> ...
>     cache_count[126]=0
>     cache_count[127]=0
>     total_cache_count=0
>   common_pool_count=32768
>   no statistics available
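For reference, here is a minimal sketch of how a pool with the element size shown in this dump could be created with DPDK 17.11's rte_pktmbuf_pool_create(); the pool name, cache size and error handling below are illustrative assumptions, not the poster's actual code:

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_errno.h>

    /* Hypothetical values matching the dump: 32768 mbufs, a 250-entry cache,
     * and 16 KiB of packet data plus the default 128-byte headroom
     * (data room 16512). */
    #define NB_MBUF        32768
    #define MBUF_CACHE     250
    #define MBUF_DATA_SIZE (16384 + RTE_PKTMBUF_HEADROOM)   /* 16512 */

    static struct rte_mempool *
    create_big_mbuf_pool(int socket_id)
    {
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create("mbuf_pool_socket_0", NB_MBUF,
                                     MBUF_CACHE, 0 /* priv_size */,
                                     MBUF_DATA_SIZE, socket_id);
        if (mp == NULL)
            fprintf(stderr, "mbuf pool creation failed: %s\n",
                    rte_strerror(rte_errno));
        return mp;
    }

With priv_size 0, the element size comes out to sizeof(struct rte_mbuf) + 16512 = 128 + 16512 = 16640, which matches elt_size in the dump above.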
>
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, March 1, 2022 5:46 PM
> To: Cliff Burdick <shaklee3@gmail.com>
> Cc: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
>
> On Tue, 1 Mar 2022 13:37:07 -0800
> Cliff Burdick <shaklee3@gmail.com> wrote:
>
> > Can you verify how many buffers you're allocating? I don't see how
> > many you're allocating in this thread.
> >
> > On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed <Ed.Lombardo@netscout.com>
> > wrote:
> >
> > > Hi Stephen,
> > > The VM is configured to have 32 GB of memory.
> > > Will dpdk consume the 2GB of hugepage memory for the mbufs?
> > > I don't mind having fewer mbufs with a 16K mbuf size vs. the original
> > > 2K mbuf size.
> > >
> > > Thanks,
> > > Ed
> > >
> > > -----Original Message-----
> > > From: Stephen Hemminger <stephen@networkplumber.org>
> > > Sent: Tuesday, March 1, 2022 2:57 PM
> > > To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> > > Cc: users@dpdk.org
> > > Subject: Re: How to increase mbuf size in dpdk version 17.11
> > >
> > >
> > > On Tue, 1 Mar 2022 18:34:22 +0000
> > > "Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:
> > >
> > > > Hi,
> > > > I have an application built with dpdk 17.11.
> > > > During initialization I want to change the mbuf size from 2K to 16K.
> > > > I want to receive packet sizes of 8K or more in one mbuf.
> > > >
> > > > The VM running the application is configured to have 2G hugepages.
> > > >
> > > > I tried many things and I get an error when a packet arrives.
> > > >
> > > > I read online that there is a #define DEFAULT_MBUF_DATA_SIZE, which I
> > > > changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > > > The call to rte_pktmbuf_pool_create() returns success with my changes.
> > > > From the port stats, "rx_nombuf" (total number of Rx mbuf allocation
> > > > failures) increments each time a packet arrives.
> > > >
> > > > Is there any reference document explaining what causes this error?
> > > > Is there a user guide I should follow to make the mbuf size change,
> > > > starting with the hugepage value?
> > > >
> > > > Thanks,
> > > > Ed
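As an aside on the approach described above: DEFAULT_MBUF_DATA_SIZE is (in testpmd, at least) a compile-time constant, and the same effect can be had at run time by passing a larger data_room_size to rte_pktmbuf_pool_create() and configuring the port to accept large frames. Below is a hedged sketch against the DPDK 17.11 API; the port id, descriptor counts and 16384-byte limit are assumptions for illustration. It also shows where the rx_nombuf counter mentioned above lives (the port statistics):

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical helper: configure one port to accept frames up to 16 KiB
     * using the legacy 17.11 rxmode flags, drawing Rx buffers from the
     * big-mbuf pool "mp". */
    static int
    setup_port_for_big_frames(uint16_t port_id, struct rte_mempool *mp)
    {
        struct rte_eth_conf conf;
        int ret;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.jumbo_frame = 1;          /* legacy bitfield in 17.11 */
        conf.rxmode.max_rx_pkt_len = 16384;   /* accept 8K+ frames */

        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret < 0)
            return ret;
        ret = rte_eth_rx_queue_setup(port_id, 0, 1024,
                                     rte_eth_dev_socket_id(port_id), NULL, mp);
        if (ret < 0)
            return ret;
        return rte_eth_tx_queue_setup(port_id, 0, 1024,
                                      rte_eth_dev_socket_id(port_id), NULL);
    }

    /* rx_nombuf is a port statistic, incremented when the PMD cannot
     * allocate an mbuf for a received packet. */
    static uint64_t
    rx_mbuf_alloc_failures(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        rte_eth_stats_get(port_id, &stats);
        return stats.rx_nombuf;
    }

Whether a given NIC/PMD will place a whole 8K+ frame into a single 16 KiB buffer is driver-dependent (see footnote [1] further down about bnxt).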
> > >
> > > Did you check that you have enough memory in the system for the
> > > larger footprint?
> > > Using 16K per mbuf is going to cause lots of memory to be consumed.
>
> A little math; you can fill in your own values.
>
> Assuming you want 16K of data.
>
> You need at a minimum [1]
>     num_rxq := total number of receive queues
>     num_rxd := number of receive descriptors per receive queue
>     num_txq := total number of transmit queues (assume all can be full)
>     num_txd := number of transmit descriptors
>     num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size
>
> Assuming you are using code copied from an example like l3fwd, with 4 Rx
> queues:
>
>     num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320
>
> Each mbuf element requires [2]
>     elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
>              = 128 + 128 + 16K = 16640
>
>     obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL)
>              = 16832
>
> So total pool is
>     num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes ≈ 134 MiB
>
>
> [1] Some devices like bnxt need multiple buffers per packet.
> [2] Often applications want additional space per mbuf for meta-data.
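The arithmetic above can also be checked in code; the following sketch simply plugs in Stephen's example values and uses rte_mempool_calc_obj_size() to get the per-object size (the standalone main() and the values are illustrative):

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    int
    main(void)
    {
        /* Stephen's example: 4 Rx and 4 Tx queues of 1024 descriptors each,
         * plus 4 cores with a burst size of 32. */
        unsigned int num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32;    /* 8320 */

        /* 16 KiB of data + default headroom + the rte_mbuf struct itself. */
        uint32_t elt_size = sizeof(struct rte_mbuf) +
                            RTE_PKTMBUF_HEADROOM + 16384;         /* 16640 */
        uint32_t obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL);

        printf("%u mbufs * %u bytes/object = %llu bytes total\n",
               num_mbufs, obj_size,
               (unsigned long long)num_mbufs * obj_size);
        return 0;
    }

(This counts only the objects themselves; the mempool header, the ring and any per-mbuf application metadata mentioned in [2] come on top.)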
>
>
>


Thread overview: 13+ messages
2022-03-01 18:34 Lombardo, Ed
2022-03-01 19:56 ` Stephen Hemminger
2022-03-01 21:30   ` Lombardo, Ed
2022-03-01 21:37     ` Cliff Burdick
2022-03-01 22:46       ` Stephen Hemminger
2022-03-02  1:37         ` Lombardo, Ed
2022-03-02  3:45           ` Cliff Burdick
2022-03-02  3:50             ` Lombardo, Ed
2022-03-02  3:56               ` Cliff Burdick [this message]
2022-03-02  4:40                 ` Stephen Hemminger
2022-03-02  5:48                   ` Lombardo, Ed
2022-03-02 14:47                     ` Cliff Burdick
2022-03-02 14:20                 ` Lombardo, Ed
