* How to increase mbuf size in dpdk version 17.11
From: Lombardo, Ed @ 2022-03-01 18:34 UTC
To: users

Hi,
I have an application built with dpdk 17.11.
During initialization I want to change the mbuf size from 2K to 16K, so that
I can receive packets of 8K or more in a single mbuf.

The VM running the application is configured with 2 GB of hugepages.

I have tried many things, and I get an error when a packet arrives.

I read online that there is a #define DEFAULT_MBUF_DATA_SIZE, which I changed
from 2176 to ((2048*8)+128), where 128 is for the headroom.
The call to rte_pktmbuf_pool_create() returns success with my changes.
However, the "rx_nombuf" counter (total number of Rx mbuf allocation failures)
increments each time a packet arrives.

Is there any reference document explaining what causes this error?
Is there a user guide I should follow to make the mbuf size change, starting
with the hugepage value?

Thanks,
Ed
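For reference, the larger data room does not have to come from patching
DEFAULT_MBUF_DATA_SIZE in the DPDK sources; rte_pktmbuf_pool_create() takes the
per-mbuf data room size as an argument. A minimal sketch follows; the pool
name, mbuf count and cache size are illustrative placeholders, not values from
Ed's application:

    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Illustrative sizing only: 16K of packet data plus the default headroom.
     * See the footprint maths later in the thread before choosing NB_MBUF
     * for a 2 GB hugepage budget. */
    #define NB_MBUF         8192
    #define MBUF_CACHE_SIZE 250
    #define MBUF_DATA_SIZE  (16 * 1024 + RTE_PKTMBUF_HEADROOM)

    static struct rte_mempool *
    create_16k_pool(void)
    {
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create("rx_pool_16k",
                                     NB_MBUF,
                                     MBUF_CACHE_SIZE,
                                     0,               /* no per-mbuf private area */
                                     MBUF_DATA_SIZE,  /* data room, headroom included */
                                     rte_socket_id());
        if (mp == NULL)
            printf("mbuf pool creation failed: %s\n", rte_strerror(rte_errno));
        return mp;
    }

Depending on the NIC and PMD, receiving an 8K+ frame into a single mbuf may
also require enabling jumbo frames on the port (in 17.11, rxmode.jumbo_frame
and rxmode.max_rx_pkt_len in the rte_eth_conf passed to
rte_eth_dev_configure()); otherwise the driver may still split the frame into
chained segments.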
* Re: How to increase mbuf size in dpdk version 17.11
From: Stephen Hemminger @ 2022-03-01 19:56 UTC
To: Lombardo, Ed; +Cc: users

Did you check that you have enough memory in the system for the larger
footprint? Using 16K per mbuf is going to cause lots of memory to be consumed.
* RE: How to increase mbuf size in dpdk version 17.11
From: Lombardo, Ed @ 2022-03-01 21:30 UTC
To: Stephen Hemminger; +Cc: users

Hi Stephen,
The VM is configured to have 32 GB of memory.
Will dpdk consume the 2 GB of hugepage memory for the mbufs?
I don't mind having fewer mbufs with an mbuf size of 16K versus the original
mbuf size of 2K.

Thanks,
Ed
* Re: How to increase mbuf size in dpdk version 17.11
From: Cliff Burdick @ 2022-03-01 21:37 UTC
To: Lombardo, Ed; +Cc: Stephen Hemminger, users

Can you verify how many buffers you're allocating? I don't see how many
you're allocating in this thread.
* Re: How to increase mbuf size in dpdk version 17.11
From: Stephen Hemminger @ 2022-03-01 22:46 UTC
To: Cliff Burdick; +Cc: Lombardo, Ed, users

A little maths; you can fill in your own values. Assuming you want 16K of
data, you need at a minimum [1]:

  num_rxq  := total number of receive queues
  num_rxd  := number of receive descriptors per receive queue
  num_txq  := total number of transmit queues (assume all can be full)
  num_txd  := number of transmit descriptors per transmit queue

  num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size

Assuming you are using code copy/pasted from some example like l3fwd, with
4 Rxq:

  num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320

Each mbuf element requires [2]:

  elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
           = 128 + 128 + 16K = 16640

  obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL)
           = 16832

So the total pool is:

  num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes ~ 134 MiB

[1] Some devices, like bnxt, need multiple buffers per packet.
[2] Often applications want additional space per mbuf for meta-data.
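The same arithmetic can also be run in code against an application's real
configuration, using the mempool helper Stephen references. A rough sketch;
the queue, descriptor, core and burst counts below are the placeholder values
from the example above, not anything measured:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Placeholder queue/descriptor/core counts -- substitute the values the
     * application actually passes to rte_eth_dev_configure() and the
     * rx/tx queue setup calls. */
    static void
    estimate_pool_footprint(void)
    {
        const unsigned int num_rxq = 4, num_rxd = 1024;
        const unsigned int num_txq = 4, num_txd = 1024;
        const unsigned int num_cores = 4, burst = 32;
        const uint32_t data_room = 16 * 1024 + RTE_PKTMBUF_HEADROOM;

        unsigned int num_mbufs = num_rxq * num_rxd + num_txq * num_txd +
                                 num_cores * burst;
        uint32_t elt_size = sizeof(struct rte_mbuf) + data_room;
        uint32_t obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL);

        printf("mbufs=%u obj_size=%u pool=%" PRIu64 " bytes\n",
               num_mbufs, obj_size, (uint64_t)num_mbufs * obj_size);
    }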
* RE: How to increase mbuf size in dpdk version 17.11
From: Lombardo, Ed @ 2022-03-02 01:37 UTC
To: Stephen Hemminger, Cliff Burdick; +Cc: users

Here is the output from rte_mempool_dump() after creating the mbuf pool with
mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0):

  nb_mbuf_per_pool = 32768
  mb_size = 16640
  16512 * 32768 = 541,065,216

  mempool <mbuf_pool_socket_0>@0x17f811400
    flags=10
    pool=0x17f791180
    iova=0x80fe11400
    nb_mem_chunks=1
    size=32768
    populated_size=32768
    header_size=64
    elt_size=16640
    trailer_size=0
    total_obj_size=16704
    private_data_size=64
    avg bytes/object=16704.000000
    internal cache infos:
      cache_size=250
      cache_count[0]=0
      ...
      cache_count[126]=0
      cache_count[127]=0
      total_cache_count=0
    common_pool_count=32768
    no statistics available
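As a cross-check against the formula in Stephen's previous message, the dump
above corresponds to:

    32768 objects * 16704 bytes (total_obj_size) = 547,356,672 bytes ~ 522 MiB

which, taken on its own, fits within a 2 GB hugepage reservation.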
* Re: How to increase mbuf size in dpdk version 17.11
From: Cliff Burdick @ 2022-03-02 03:45 UTC
To: Lombardo, Ed; +Cc: Stephen Hemminger, users

Can you paste the output of "cat /proc/meminfo"?
* RE: How to increase mbuf size in dpdk version 17.11
From: Lombardo, Ed @ 2022-03-02 03:50 UTC
To: Cliff Burdick; +Cc: Stephen Hemminger, users

[root@vSTREAM_632 ~]# cat /proc/meminfo
MemTotal:       32778372 kB
MemFree:        15724124 kB
MemAvailable:   15897392 kB
Buffers:           18384 kB
Cached:           526768 kB
SwapCached:            0 kB
Active:           355140 kB
Inactive:         173360 kB
Active(anon):      62472 kB
Inactive(anon):    12484 kB
Active(file):     292668 kB
Inactive(file):   160876 kB
Unevictable:    13998696 kB
Mlocked:        13998696 kB
SwapTotal:       3906556 kB
SwapFree:        3906556 kB
Dirty:                76 kB
Writeback:             0 kB
AnonPages:      13986156 kB
Mapped:            95500 kB
Shmem:             16864 kB
Slab:             121952 kB
SReclaimable:      71128 kB
SUnreclaim:        50824 kB
KernelStack:        4608 kB
PageTables:        31524 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    19247164 kB
Committed_AS:   14170424 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      212012 kB
VmallocChunk:   34342301692 kB
Percpu:             2816 kB
HardwareCorrupted:     0 kB
AnonHugePages:  13228032 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:    1024
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      104320 kB
DirectMap2M:    33449984 kB
* Re: How to increase mbuf size in dpdk version 17.11
From: Cliff Burdick @ 2022-03-02 03:56 UTC
To: Lombardo, Ed; +Cc: Stephen Hemminger, users

That's showing you have 0 hugepages free. Maybe they weren't passed through
to the VM properly?
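If the pages really were not reserved inside the guest, a quick way to check
and re-reserve 2 MB pages from within the VM is sketched below; the sysfs
paths are the standard kernel locations, while the page count and mount point
are only examples:

    # current 2 MB hugepage counters
    grep -i huge /proc/meminfo

    # reserve 1024 x 2 MB pages (2 GB) at runtime
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    # make sure a hugetlbfs mount exists for DPDK to map from
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge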
* Re: How to increase mbuf size in dpdk version 17.11
From: Stephen Hemminger @ 2022-03-02 04:40 UTC
To: Cliff Burdick; +Cc: Lombardo, Ed, users

On Tue, 1 Mar 2022 19:56:39 -0800, Cliff Burdick <shaklee3@gmail.com> wrote:

> That's showing you have 0 hugepages free. Maybe they weren't passed through
> to the VM properly?

Which hypervisor? Not all hypervisors really support hugepages.
* RE: How to increase mbuf size in dpdk version 17.11
From: Lombardo, Ed @ 2022-03-02 05:48 UTC
To: Stephen Hemminger, Cliff Burdick; +Cc: users

I am using the VMware hypervisor.
* Re: How to increase mbuf size in dpdk version 17.11
From: Cliff Burdick @ 2022-03-02 14:47 UTC
To: Lombardo, Ed; +Cc: Stephen Hemminger, users

Did you follow the instructions here?
https://docs.vmware.com/en/VMware-vCloud-NFV-OpenStack-Edition/3.0/vmwa-vcloud-nfv30-performance-tunning/GUID-1F05987F-012B-4BC4-9015-CDE3C991C68C.html
* RE: How to increase mbuf size in dpdk version 17.11
From: Lombardo, Ed @ 2022-03-02 14:20 UTC
To: Cliff Burdick; +Cc: Stephen Hemminger, users

Hi,
When I return to the 2K mbuf size, the hugepage info in /proc/meminfo looks
exactly the same:

  HugePages_Total:    1024
  HugePages_Free:        0
  HugePages_Rsvd:        0
  HugePages_Surp:        0
  Hugepagesize:       2048 kB

I did not make any changes to the arguments to rte_eal_init() when I tried
the 16K mbuf configuration.
Thread overview: 13+ messages

  2022-03-01 18:34 How to increase mbuf size in dpdk version 17.11  Lombardo, Ed
  2022-03-01 19:56 ` Stephen Hemminger
  2022-03-01 21:30   ` Lombardo, Ed
  2022-03-01 21:37     ` Cliff Burdick
  2022-03-01 22:46       ` Stephen Hemminger
  2022-03-02  1:37         ` Lombardo, Ed
  2022-03-02  3:45           ` Cliff Burdick
  2022-03-02  3:50             ` Lombardo, Ed
  2022-03-02  3:56               ` Cliff Burdick
  2022-03-02  4:40                 ` Stephen Hemminger
  2022-03-02  5:48                   ` Lombardo, Ed
  2022-03-02 14:47                     ` Cliff Burdick
  2022-03-02 14:20               ` Lombardo, Ed