From: amit sehas <cun23@yahoo.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: rte_pktmbuf_alloc() out of rte_mbufs
Date: Tue, 26 Nov 2024 17:57:59 +0000 (UTC)
Message-ID: <450859216.2968663.1732643879391@mail.yahoo.com>
In-Reply-To: <20241122084557.726e38e7@hermes.local>
rte_mempool_dump() with debugging enabled reports the data below. put_bulk is 40671864 and get_success_bulk is 40675959; the difference between them is 4095, which is exactly the number of buffers in the pool. I will try to dig into the meaning of put_bulk and get_success_bulk to determine whether some kind of buffer leak is occurring ... some amount of code review did not turn up an obvious issue. (A sketch for cross-checking the pool's usage counters directly follows the dump.)
mempool <mbuf_pool3>@0x16c4e2b00
flags=10
socket_id=-1
pool=0x16c4da840
iova=0x3ac4e2b00
nb_mem_chunks=1
size=4095
populated_size=4095
header_size=64
elt_size=2176
trailer_size=128
total_obj_size=2368
private_data_size=64
ops_index=0
ops_name: <ring_mp_mc>
avg bytes/object=2368.578266
stats:
put_bulk=40671864
put_objs=40671864
put_common_pool_bulk=4095
put_common_pool_objs=4095
get_common_pool_bulk=455
get_common_pool_objs=4095
get_success_bulk=40675959
get_success_objs=40675959
get_fail_bulk=1
get_fail_objs=1
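For cross-checking the stats above, the pool's usage counters can also be read directly at runtime; a minimal sketch, assuming the pool was created under the name "mbuf_pool3" as in the dump:

/* Minimal sketch: report how many mbufs are currently outstanding.
 * Assumes a pool created elsewhere under the name "mbuf_pool3";
 * adjust the name to match your application. */
#include <stdio.h>
#include <rte_mempool.h>

static void
check_pool_usage(void)
{
	struct rte_mempool *mp = rte_mempool_lookup("mbuf_pool3");

	if (mp == NULL) {
		printf("pool not found\n");
		return;
	}
	/* in_use = size - (objects in the ring + objects in per-lcore caches) */
	printf("%s: size=%u avail=%u in_use=%u\n",
	       mp->name, mp->size,
	       rte_mempool_avail_count(mp),
	       rte_mempool_in_use_count(mp));
}

Note that rte_mempool_in_use_count() counts objects sitting in the per-lcore caches as available, so an in_use value that keeps growing while traffic is idle points at a leak rather than at cached buffers.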
On Friday, November 22, 2024 at 08:46:00 AM PST, Stephen Hemminger <stephen@networkplumber.org> wrote:
On Fri, 22 Nov 2024 02:38:55 +0000 (UTC)
amit sehas <cun23@yahoo.com> wrote:
> I am frequently running out of mbufs when allocating packets. When this happens, is there a way to dump counts of which buffers are where, so we know what is going on?
>
> I know that each rte_mbuf pool also has a per-CPU-core cache to speed up alloc/free, and some of the buffers will end up there; if a particular core never uses a particular mempool, perhaps those mbufs are lost ... that is my rough guess ...
>
> How do you debug an out-of-mbufs issue?
>
> regards
The function rte_mempool_dump() will tell you some information about the status of a particular mempool.
If you enable mempool statistics you can get more info.
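For reference, a minimal sketch of how such a dump can be produced for every pool at once; rte_mempool_walk() visits each registered mempool, and the stats: section only appears when the library is built with mempool statistics (the gating macro is RTE_LIBRTE_MEMPOOL_DEBUG in older releases and RTE_LIBRTE_MEMPOOL_STATS in newer ones -- check your version):

/* Minimal sketch: dump the state of every mempool to stdout. */
#include <stdio.h>
#include <rte_common.h>
#include <rte_mempool.h>

static void
dump_one(struct rte_mempool *mp, void *arg __rte_unused)
{
	rte_mempool_dump(stdout, mp);
}

static void
dump_all_pools(void)
{
	rte_mempool_walk(dump_one, NULL);
}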
The best way to size a memory pool is to account for all the possible places mbufs can be waiting.
Something like:
Num Ports * Num RxQ * Num RxD + Num Ports * Num TxQ * Num TxD + Num Lcores * Burst Size + Num Lcores * Cache Size
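A back-of-the-envelope sketch of that rule with illustrative numbers (2 ports, 4 RX and 4 TX queues of 1024 descriptors each, 8 lcores, bursts of 32, per-lcore cache of 256 -- substitute your own values):

/* Minimal sketch of the sizing rule above; all numbers are made up. */
#include <stdio.h>

int main(void)
{
	unsigned ports = 2, rxq = 4, rxd = 1024, txq = 4, txd = 1024;
	unsigned lcores = 8, burst = 32, cache = 256;

	unsigned need = ports * rxq * rxd   /* mbufs sitting in RX rings */
	              + ports * txq * txd   /* mbufs awaiting TX completion */
	              + lcores * burst      /* in-flight bursts per lcore */
	              + lcores * cache;     /* per-lcore mempool caches */

	printf("minimum pool size: %u mbufs\n", need); /* 18688 here */
	return 0;
}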
Running out of mbufs is often caused by a failure to free a received mbuf, or by a buggy driver.
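Since the per-lcore caches are one of the places mbufs wait (as the question above guessed), here is a minimal sketch for inspecting them, assuming a pointer to the pool is at hand:

/* Minimal sketch: show how many mbufs are parked in each enabled
 * lcore's mempool cache -- one of the "waiting" places in the
 * formula above. */
#include <stdio.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static void
dump_lcore_caches(struct rte_mempool *mp)
{
	unsigned lcore_id;

	RTE_LCORE_FOREACH(lcore_id) {
		struct rte_mempool_cache *c =
			rte_mempool_default_cache(mp, lcore_id);

		/* NULL when the pool was created with cache_size == 0 */
		if (c != NULL)
			printf("lcore %u: %u mbufs cached\n", lcore_id, c->len);
	}
}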