Can you verify how many buffers you're allocating? I don't see how many you're allocating in this thread.

On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed wrote:
> Hi Stephen,
> The VM is configured to have 32 GB of memory.
> Will DPDK consume the 2 GB of hugepage memory for the mbufs?
> I don't mind having fewer mbufs with an mbuf size of 16K vs the original
> mbuf size of 2K.
>
> Thanks,
> Ed
>
> -----Original Message-----
> From: Stephen Hemminger
> Sent: Tuesday, March 1, 2022 2:57 PM
> To: Lombardo, Ed
> Cc: users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
> On Tue, 1 Mar 2022 18:34:22 +0000
> "Lombardo, Ed" wrote:
>
> > Hi,
> > I have an application built with DPDK 17.11.
> > During initialization I want to change the mbuf size from 2K to 16K.
> > I want to receive packet sizes of 8K or more in one mbuf.
> >
> > The VM running the application is configured to have 2G hugepages.
> >
> > I tried many things and I get an error when a packet arrives.
> >
> > I read online that there is #define DEFAULT_MBUF_DATA_SIZE, which I
> > changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > The call to rte_pktmbuf_pool_create() returns success with my changes.
> > In the rte_mempool_dump() output, "rx_nombuf" (the total number of Rx
> > mbuf allocation failures) increments each time a packet arrives.
> >
> > Is there any reference document explaining what causes this error?
> > Is there a user guide I should follow to make the mbuf size change,
> > starting with the hugepage value?
> >
> > Thanks,
> > Ed
>
> Did you check that you have enough memory in the system for the larger
> footprint?
> Using 16K per mbuf is going to cause lots of memory to be consumed.
>
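
For reference, the pool size and per-mbuf data room can be passed directly to rte_pktmbuf_pool_create() rather than patching DEFAULT_MBUF_DATA_SIZE, which also makes the memory-footprint arithmetic explicit. Below is a minimal sketch against the 17.11 API; the pool name, the 65536 mbuf count, the 256-entry cache and the socket_id handling are illustrative assumptions, not values from Ed's application:

    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* 16 KiB of packet data per mbuf, plus the configured headroom
     * (RTE_PKTMBUF_HEADROOM, 128 bytes by default). */
    #define BIG_MBUF_DATA_SIZE  (16 * 1024 + RTE_PKTMBUF_HEADROOM)

    /* Illustrative count.  Each mbuf costs roughly sizeof(struct rte_mbuf)
     * + priv_size + data_room_size plus mempool per-object overhead, i.e.
     * a bit over 16.5 KiB here, so 65536 of them need roughly 1.1 GiB of
     * the 2 GiB hugepage budget, leaving room for rings, descriptors and
     * any other pools. */
    #define BIG_NB_MBUF         65536
    #define BIG_MBUF_CACHE      256

    static struct rte_mempool *
    create_big_mbuf_pool(int socket_id)
    {
            struct rte_mempool *mp;

            mp = rte_pktmbuf_pool_create("big_mbuf_pool",
                                         BIG_NB_MBUF,
                                         BIG_MBUF_CACHE,
                                         0,                   /* priv_size */
                                         BIG_MBUF_DATA_SIZE,  /* data_room_size */
                                         socket_id);
            if (mp == NULL)
                    printf("mbuf pool creation failed: %s\n",
                           rte_strerror(rte_errno));
            return mp;
    }

Even with a 16 KiB data room, the port still has to be configured to accept large frames (in 17.11, rxmode.jumbo_frame and rxmode.max_rx_pkt_len); otherwise the PMD will keep dropping or splitting anything larger than the default MTU.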