From: Cliff Burdick <shaklee3@gmail.com>
Date: Tue, 1 Mar 2022 19:56:39 -0800
Subject: Re: How to increase mbuf size in dpdk version 17.11
To: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, users@dpdk.org
List-Id: DPDK usage discussions <users.dpdk.org>

That's showing you have 0 hugepages free. Maybe they weren't passed
through to the VM properly?
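[Editor's note: a quick way to catch this condition early is to have the
application check the hugepage counters itself before calling
rte_eal_init(). Below is a minimal sketch in plain C; it only reads the
standard Linux /proc/meminfo keys, and the helper name is illustrative,
not part of any DPDK API.]

    #include <stdio.h>
    #include <string.h>

    /* Sketch: print the hugepage counters from /proc/meminfo so a
     * "HugePages_Free: 0" condition is spotted before mbuf pool
     * creation fails later. Reads only standard /proc/meminfo keys. */
    static void check_hugepages(void)
    {
            FILE *f = fopen("/proc/meminfo", "r");
            char line[256];

            if (f == NULL)
                    return;
            while (fgets(line, sizeof(line), f) != NULL) {
                    if (strncmp(line, "HugePages_", 10) == 0 ||
                        strncmp(line, "Hugepagesize", 12) == 0)
                            fputs(line, stdout);
            }
            fclose(f);
    }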
On Tue, Mar 1, 2022 at 7:50 PM Lombardo, Ed <Ed.Lombardo@netscout.com> wrote:

> [root@vSTREAM_632 ~]# cat /proc/meminfo
> MemTotal:       32778372 kB
> MemFree:        15724124 kB
> MemAvailable:   15897392 kB
> Buffers:           18384 kB
> Cached:           526768 kB
> SwapCached:            0 kB
> Active:           355140 kB
> Inactive:         173360 kB
> Active(anon):      62472 kB
> Inactive(anon):    12484 kB
> Active(file):     292668 kB
> Inactive(file):   160876 kB
> Unevictable:    13998696 kB
> Mlocked:        13998696 kB
> SwapTotal:       3906556 kB
> SwapFree:        3906556 kB
> Dirty:                76 kB
> Writeback:             0 kB
> AnonPages:      13986156 kB
> Mapped:            95500 kB
> Shmem:             16864 kB
> Slab:             121952 kB
> SReclaimable:      71128 kB
> SUnreclaim:        50824 kB
> KernelStack:        4608 kB
> PageTables:        31524 kB
> NFS_Unstable:          0 kB
> Bounce:                0 kB
> WritebackTmp:          0 kB
> CommitLimit:    19247164 kB
> Committed_AS:   14170424 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:      212012 kB
> VmallocChunk:   34342301692 kB
> Percpu:             2816 kB
> HardwareCorrupted:     0 kB
> AnonHugePages:  13228032 kB
> CmaTotal:              0 kB
> CmaFree:               0 kB
> HugePages_Total:    1024
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> DirectMap4k:      104320 kB
> DirectMap2M:    33449984 kB
>
> From: Cliff Burdick <shaklee3@gmail.com>
> Sent: Tuesday, March 1, 2022 10:45 PM
> To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
> Can you paste the output of "cat /proc/meminfo"?
>
> On Tue, Mar 1, 2022 at 5:37 PM Lombardo, Ed <Ed.Lombardo@netscout.com> wrote:
>
> Here is the output from rte_mempool_dump() after creating the mbuf pool
> with "mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0)":
>  nb_mbuf_per_pool = 32768
>  mb_size = 16640
>  16512 * 32768 = 541,065,216
>
> mempool <mbuf_pool_socket_0>@0x17f811400
>   flags=10
>   pool=0x17f791180
>   iova=0x80fe11400
>   nb_mem_chunks=1
>   size=32768
>   populated_size=32768
>   header_size=64
>   elt_size=16640
>   trailer_size=0
>   total_obj_size=16704
>   private_data_size=64
>   avg bytes/object=16704.000000
>   internal cache infos:
>     cache_size=250
>     cache_count[0]=0
>     ...
>     cache_count[126]=0
>     cache_count[127]=0
>     total_cache_count=0
>   common_pool_count=32768
>   no statistics available
>
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, March 1, 2022 5:46 PM
> To: Cliff Burdick <shaklee3@gmail.com>
> Cc: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
> On Tue, 1 Mar 2022 13:37:07 -0800
> Cliff Burdick <shaklee3@gmail.com> wrote:
>
> > Can you verify how many buffers you're allocating? I don't see how
> > many you're allocating in this thread.
> >
> > On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed <Ed.Lombardo@netscout.com>
> > wrote:
> >
> > > Hi Stephen,
> > > The VM is configured to have 32 GB of memory.
> > > Will dpdk consume the 2 GB of hugepage memory for the mbufs?
> > > I don't mind having fewer mbufs at a 16K mbuf size than at the
> > > original 2K mbuf size.
> > > Thanks,
> > > Ed
> > >
> > > -----Original Message-----
> > > From: Stephen Hemminger <stephen@networkplumber.org>
> > > Sent: Tuesday, March 1, 2022 2:57 PM
> > > To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> > > Cc: users@dpdk.org
> > > Subject: Re: How to increase mbuf size in dpdk version 17.11
> > >
> > > On Tue, 1 Mar 2022 18:34:22 +0000
> > > "Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:
> > >
> > > > Hi,
> > > > I have an application built with dpdk 17.11.
> > > > During initialization I want to change the mbuf size from 2K to
> > > > 16K, so that I can receive packets of 8K or more in one mbuf.
> > > >
> > > > The VM running the application is configured to have 2G hugepages.
> > > >
> > > > I tried many things, and I get an error when a packet arrives.
> > > >
> > > > I read online that there is a #define DEFAULT_MBUF_DATA_SIZE, which
> > > > I changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > > > The call to rte_pktmbuf_pool_create() returns success with my
> > > > changes. From the rte_mempool_dump(), "rx_nombuf" (the total number
> > > > of Rx mbuf allocation failures) increments each time a packet
> > > > arrives.
> > > >
> > > > Is there any reference document explaining what causes this error?
> > > > Is there a user guide I should follow to make the mbuf size change,
> > > > starting with the hugepage value?
> > > >
> > > > Thanks,
> > > > Ed
> > >
> > > Did you check that you have enough memory in the system for the
> > > larger footprint?
> > > Using 16K per mbuf is going to cause lots of memory to be consumed.
>
> A little maths; you can fill in your own values.
>
> Assuming you want 16K of data, you need at a minimum [1]:
>
>     num_rxq := total number of receive queues
>     num_rxd := number of receive descriptors per receive queue
>     num_txq := total number of transmit queues (assume all can be full)
>     num_txd := number of transmit descriptors per transmit queue
>
>     num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size
>
> Assuming you are using code copy/pasted from some example like l3fwd,
> with 4 Rx queues:
>
>     num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320
>
> Each mbuf element requires [2]:
>
>     elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
>              = 128 + 128 + 16K = 16640
>
>     obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL)
>              = 16832
>
> So the total pool is:
>
>     num_mbufs * obj_size = 8320 * 16832 = 140,042,240 ~ 139M
>
> [1] Some devices, like bnxt, need multiple buffers per packet.
> [2] Applications often want additional space per mbuf for metadata.
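[Editor's note: to make the arithmetic above concrete, here is a minimal
C sketch that computes num_mbufs from Stephen's formula and creates the
corresponding pool. The queue and descriptor counts are the hypothetical
l3fwd-style values from his example, and the data-room size matches the
16512 bytes (16384 payload + 128 headroom) Ed used; substitute your own
values.]

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Hypothetical queue/descriptor counts, matching the l3fwd-style
     * example above; substitute what your application configures. */
    #define NUM_RXQ    4
    #define NUM_RXD    1024
    #define NUM_TXQ    4
    #define NUM_TXD    1024
    #define NUM_CORES  4
    #define BURST_SIZE 32

    /* 16K of packet data plus headroom: (2048 * 8) + 128 = 16512 */
    #define MBUF_DATA_SIZE ((2048 * 8) + RTE_PKTMBUF_HEADROOM)

    /* Call only after rte_eal_init() has succeeded. */
    static struct rte_mempool *create_16k_pool(void)
    {
            unsigned int num_mbufs = NUM_RXQ * NUM_RXD +
                                     NUM_TXQ * NUM_TXD +
                                     NUM_CORES * BURST_SIZE;  /* 8320 */
            struct rte_mempool *mp;

            mp = rte_pktmbuf_pool_create("mbuf_pool_16k", num_mbufs,
                                         250, /* per-lcore cache, as in
                                                 the dump above */
                                         0,   /* private data size */
                                         MBUF_DATA_SIZE,
                                         rte_socket_id());
            if (mp != NULL)
                    rte_mempool_dump(stdout, mp);
            return mp;
    }

[At 8320 mbufs this pool needs roughly 139 MB. The 32768-mbuf pool in the
dump above works out to about 547 MB (32768 * 16704), which still fits in
the 2 GB of hugepages once the pages are actually free.]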