From: Ferruh Yigit <ferruh.yigit@amd.com>
To: Yasin CANER <yasinncaner@gmail.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, users@dpdk.org
Subject: Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
Date: Thu, 18 May 2023 15:56:39 +0100 [thread overview]
Message-ID: <4f53f3be-0bae-e204-5737-7735b4a2ba5b@amd.com> (raw)
In-Reply-To: <CAP5epcMADBx-GZ5jTp+PtA-XJYVqUXarE1TP20X_eWb=27WCJQ@mail.gmail.com>
On 5/18/2023 9:14 AM, Yasin CANER wrote:
> Hello Ferruh,
>
> Thanks for your kind response. Also thanks to Stephen.
>
> Even if only 1 packet is consumed by the kernel, each time rx_kni
> allocates another 32 units. After a while the whole mempool is sitting
> in the kni 'alloc_q' and there is no room left in it.
>
What you described continues only until 'alloc_q' is full; by default the
fifo length is 1024 (KNI_FIFO_COUNT_MAX). Do you allocate fewer mbufs
than that in your mempool?

You can consider either increasing the mempool size or decreasing the
'alloc_q' fifo length, but reducing the fifo size may cause performance
issues, so you need to evaluate that option.
> Do you think my mistake is using one and common mempool usage both kni
> and eth?
>
Using same mempool for both is fine.
> If it needs a separate mempool, I'd like to note that in the docs.
>
> Best regards.
>
> Ferruh Yigit <ferruh.yigit@amd.com> wrote on Wed, 17 May 2023 at 20:53:
>
> On 5/9/2023 12:13 PM, Yasin CANER wrote:
> > Hello,
> >
> > I drew a flow via asciiflow to explain myself better. The problem is
> > that after transmitting packets (mbufs), they are never put into
> > kni->free_q to go back to the original pool. Each cycle allocates
> > another 32 units, which causes a leak. Or I am missing something.
> >
> > I already tried the rte_eth_tx_done_cleanup() function but it didn't
> > fix anything.
> >
> > I am working on a patch to fix this issue but I am not sure if there
> > is another way.
> >
> > Best regards.
> >
> > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
> >
> >
> > unsigned
> > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
> >                  unsigned int num)
> > {
> >     unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
> >
> >     /* If buffers removed, allocate mbufs and put them into alloc_q */
> >     /* Question: how can I test whether buffers were removed? */
> >     if (ret)
> >         kni_allocate_mbufs(kni);
> >
> >     return ret;
> > }
> >
>
> Selam Yasin,
>
>
> You can expect the 'kni->alloc_q' fifo to be full; this is not a memory
> leak.
>
> As you pointed out, the number of mbufs consumed by the kernel from
> 'alloc_q' and the number of mbufs added to 'alloc_q' are not equal, and
> this is expected.
>
> The target here is to prevent buffer underflow from the kernel's
> perspective, so it always has mbufs available for new packets.
> That is why new mbufs are added to 'alloc_q' at the same or sometimes a
> higher rate than they are consumed.
>
> You should calculate your mbuf requirement with the assumption that
> 'kni->alloc_q' will be full of mbufs.
>
>
> 'kni->alloc_q' is freed when the kni is removed.
> Since 'alloc_q' holds the physical addresses of the mbufs, it is a
> little challenging to free them in userspace; that is why the kernel
> first tries to move the mbufs to the 'kni->free_q' fifo, please check
> 'kni_net_release_fifo_phy()' for it.
>
> If all are moved to the 'free_q' fifo, nothing is left in 'alloc_q';
> if not, userspace frees 'alloc_q' in 'rte_kni_release()' with the
> following call:
> `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
>
>
> I can see you have submitted fixes for this issue; although, as I
> explained above, I don't think a defect exists, I will review them
> today/tomorrow.
>
> Regards,
> Ferruh
>
>
> > Stephen Hemminger <stephen@networkplumber.org> wrote on Mon, 8 May
> > 2023 at 19:18:
> >
> > On Mon, 8 May 2023 09:01:41 +0300
> > Yasin CANER <yasinncaner@gmail.com> wrote:
> >
> > > Hello Stephen,
> > >
> > > Thank you for the response, it helps me a lot. I understand the
> > > problem better.
> > >
> > > After reading the mempool library guide
> > > (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html) I
> > > realized that the 31 allocated units do not go back to the pool!
> >
> > If receive burst returns 1 mbuf, the other 31 pointers in the array
> > are not valid. They do not point to mbufs.
> >
> > > A single mbuf can be freed via rte_pktmbuf_free() so it can go
> > > back to the pool.
> > >
> > > The main problem is that the allocation does not return to the
> > > original pool and acts as used. So, after following
> > > rte_pktmbuf_free()
> > > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902),
> > > I realized that there are 2 functions that help return mbufs to
> > > the pool.
> > >
> > > These are rte_mbuf_raw_free()
> > > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432)
> > > and rte_pktmbuf_free_seg()
> > > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37).
> > > I will focus on them.
> > >
> > > If there is another suggestion, I will be very pleased to hear it.
> > >
> > > Best regards.
> > >
> > > Yasin CANER
> > > Ulak
> >
>