From: Cliff Burdick <shaklee3@gmail.com>
Date: Tue, 1 Mar 2022 19:45:11 -0800
Subject: Re: How to increase mbuf size in dpdk version 17.11
To: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, users@dpdk.org

Can you paste the output of "cat /proc/meminfo"?

On Tue, Mar 1, 2022 at 5:37 PM Lombardo, Ed <Ed.Lombardo@netscout.com> wrote:

> Here is the output from rte_mempool_dump() after creating the mbuf pool
> "mbuf_pool_create(mbuf_seg_size=16512, nb_mbuf=32768, socket_id=0)":
>  nb_mbuf_per_pool = 32768
>  mb_size = 16640
>  16512 * 32768 = 541,065,216
>
> mempool <mbuf_pool_socket_0>@0x17f811400
>   flags=10
>   pool=0x17f791180
>   iova=0x80fe11400
>   nb_mem_chunks=1
>   size=32768
>   populated_size=32768
>   header_size=64
>   elt_size=16640
>   trailer_size=0
>   total_obj_size=16704
>   private_data_size=64
>   avg bytes/object=16704.000000
>   internal cache infos:
>     cache_size=250
>     cache_count[0]=0
> ...
>     cache_count[126]=0
>     cache_count[127]=0
>     total_cache_count=0
>   common_pool_count=32768
>   no statistics available
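For reference, a pool with the geometry in that dump (32768 elements, elt_size = 128-byte struct rte_mbuf + 16512-byte data room, per-lcore cache of 250) could be created along these lines in DPDK 17.11. This is a minimal sketch; the function name, pool name, and error handling are illustrative, not taken from the poster's application:

    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Minimal sketch: a pktmbuf pool matching the dump above.
     * Data room 16512 = 128 bytes of headroom + 16384 bytes (16K) of payload. */
    static struct rte_mempool *
    create_big_mbuf_pool(int socket_id)
    {
            struct rte_mempool *mp;

            mp = rte_pktmbuf_pool_create(
                    "mbuf_pool_socket_0", /* pool name, as in the dump */
                    32768,                /* nb_mbuf */
                    250,                  /* per-lcore cache size */
                    0,                    /* per-mbuf private area */
                    16512,                /* data room: headroom + payload */
                    socket_id);
            if (mp == NULL)
                    printf("mbuf pool create failed: %s\n",
                           rte_strerror(rte_errno));
            return mp;
    }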
>
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, March 1, 2022 5:46 PM
> To: Cliff Burdick <shaklee3@gmail.com>
> Cc: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
> Subject: Re: How to increase mbuf size in dpdk version 17.11
>
> On Tue, 1 Mar 2022 13:37:07 -0800
> Cliff Burdick <shaklee3@gmail.com> wrote:
>
> > Can you verify how many buffers you're allocating? I don't see how
> > many you're allocating in this thread.
> >
> > On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed <Ed.Lombardo@netscout.com>
> > wrote:
> >
> > > Hi Stephen,
> > > The VM is configured to have 32 GB of memory.
> > > Will dpdk consume the 2 GB of hugepage memory for the mbufs?
> > > I don't mind having fewer mbufs with an mbuf size of 16K vs the
> > > original mbuf size of 2K.
> > >
> > > Thanks,
> > > Ed
> > >
> > > -----Original Message-----
> > > From: Stephen Hemminger <stephen@networkplumber.org>
> > > Sent: Tuesday, March 1, 2022 2:57 PM
> > > To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> > > Cc: users@dpdk.org
> > > Subject: Re: How to increase mbuf size in dpdk version 17.11
> > >
> > > On Tue, 1 Mar 2022 18:34:22 +0000
> > > "Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:
> > >
> > > > Hi,
> > > > I have an application built with dpdk 17.11.
> > > > During initialization I want to change the mbuf size from 2K to 16K.
> > > > I want to receive packet sizes of 8K or more in one mbuf.
> > > >
> > > > The VM running the application is configured to have 2G hugepages.
> > > >
> > > > I tried many things, and I get an error when a packet arrives.
> > > >
> > > > I read online that there is a #define DEFAULT_MBUF_DATA_SIZE, which I
> > > > changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > > > The call to rte_pktmbuf_pool_create() returns success with my changes.
> > > > From the rte_mempool_dump() output, "rx_nombuf" (total number of Rx
> > > > mbuf allocation failures) increments each time a packet arrives.
> > > >
> > > > Is there any reference document explaining what causes this error?
> > > > Is there a user guide I should follow to make the mbuf size change,
> > > > starting with the hugepage value?
> > > >
> > > > Thanks,
> > > > Ed
> > >
> > > Did you check that you have enough memory in the system for the
> > > larger footprint?
> > > Using 16K per mbuf is going to cause lots of memory to be consumed.
>
> A little maths; you can fill in your own values.
>
> Assuming you want 16K of data, you need at a minimum [1]
>     num_rxq := total number of receive queues
>     num_rxd := number of receive descriptors per receive queue
>     num_txq := total number of transmit queues (assume all can be full)
>     num_txd := number of transmit descriptors per transmit queue
>
>     num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size
>
> Assuming you are using code copy/pasted from some example like l3fwd,
> with 4 Rxq:
>
>     num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320
>
> Each mbuf element requires [2]
>     elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
>              = 128 + 128 + 16K = 16640
>
>     obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL)
>              = 16832
>
> So the total pool is
>     num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes ~ 134 MiB
>
> [1] Some devices, like bnxt, need multiple buffers per packet.
> [2] Often applications want additional space per mbuf for metadata.
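To make that arithmetic concrete, here is a small stand-alone sketch. The queue, descriptor, core, and burst counts are the assumed l3fwd-style values from the example above, not values taken from the application in question:

    #include <stdint.h>
    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Sketch of the sizing maths: worst-case mbuf count times the
     * per-object footprint reported by rte_mempool_calc_obj_size(). */
    int main(void)
    {
            unsigned num_rxq = 4, num_rxd = 1024;   /* assumed values */
            unsigned num_txq = 4, num_txd = 1024;
            unsigned num_cores = 4, burst_size = 32;
            uint32_t mbuf_size = 16 * 1024;         /* 16K of data */

            unsigned num_mbufs = num_rxq * num_rxd + num_txq * num_txd +
                                 num_cores * burst_size;        /* 8320 */

            uint32_t elt_size = sizeof(struct rte_mbuf) +
                                RTE_PKTMBUF_HEADROOM + mbuf_size;
            uint32_t obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL);

            printf("num_mbufs=%u elt_size=%u obj_size=%u total=%llu bytes\n",
                   num_mbufs, elt_size, obj_size,
                   (unsigned long long)num_mbufs * obj_size);
            return 0;
    }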