DPDK usage discussions
* DPDK 22.11 - How to fix memory leak for KNI - How to debug
@ 2023-05-08  6:01 Yasin CANER
  2023-05-08 16:18 ` Stephen Hemminger
  0 siblings, 1 reply; 13+ messages in thread
From: Yasin CANER @ 2023-05-08  6:01 UTC (permalink / raw)
  To: users, stephen


Hello Stephen,

Thank you for the response, it helps me a lot; I understand the problem better.

After reading the mbuf library documentation (
https://doc.dpdk.org/guides/prog_guide/mempool_lib.html), I realized that
the 31 allocated mbuf slots are not returned to the pool!

A single mbuf can be freed via rte_pktmbuf_free(), which returns it to the pool.

The main problem is that the allocated mbufs are not returned to the
original pool; they stay marked as used. So, after following the
rte_pktmbuf_free
<http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902>
function, I realized that there are two functions that help return mbufs
to the pool.

These are rte_mbuf_raw_free
<http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432>
and rte_pktmbuf_free_seg
<http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37>.
I will focus on them.

If there is another suggestion, I will be very pleased.

Best regards.

Yasin CANER
Ulak


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-08  6:01 DPDK 22.11 - How to fix memory leak for KNI - How to debug Yasin CANER
@ 2023-05-08 16:18 ` Stephen Hemminger
  2023-05-09 11:13   ` Yasin CANER
  0 siblings, 1 reply; 13+ messages in thread
From: Stephen Hemminger @ 2023-05-08 16:18 UTC (permalink / raw)
  To: Yasin CANER; +Cc: users

On Mon, 8 May 2023 09:01:41 +0300
Yasin CANER <yasinncaner@gmail.com> wrote:

> Hello Stephen,
> 
> Thank you for response, it helps me a lot. I understand problem better.
> 
> After reading mbuf library (
> https://doc.dpdk.org/guides/prog_guide/mempool_lib.html)  i realized that
> 31 units allocation memory slot doesn't return to pool!

If receive burst returns 1 mbuf, the other 31 pointers in the array
are not valid. They do not point to mbufs.




* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-08 16:18 ` Stephen Hemminger
@ 2023-05-09 11:13   ` Yasin CANER
  2023-05-11 14:14     ` Yasin CANER
  2023-05-17 17:53     ` Ferruh Yigit
  0 siblings, 2 replies; 13+ messages in thread
From: Yasin CANER @ 2023-05-09 11:13 UTC (permalink / raw)
  To: Stephen Hemminger, users


Hello,

I drew a flow via asciiflow to explain myself better. The problem is that
after the packets (mbufs) are transmitted, they are never put into the
kni->free_q to go back to the original pool. Each cycle allocates another
32 units, which causes the leak. Or I am missing something.

I have already tried the rte_eth_tx_done_cleanup() function, but it didn't
fix anything.

I am working on a patch to fix this issue, but I am not sure if there
is another way.

Best regards.

https://pastebin.ubuntu.com/p/s4h5psqtgZ/


unsigned
rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
{
	unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);

	/* If buffers removed, allocate mbufs and then put them into alloc_q */
	/* Question: how to test whether buffers were removed or not? */
	if (ret)
		kni_allocate_mbufs(kni);

	return ret;
}




* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-09 11:13   ` Yasin CANER
@ 2023-05-11 14:14     ` Yasin CANER
  2023-05-17 17:53     ` Ferruh Yigit
  1 sibling, 0 replies; 13+ messages in thread
From: Yasin CANER @ 2023-05-11 14:14 UTC (permalink / raw)
  To: Stephen Hemminger, users

[-- Attachment #1: Type: text/plain, Size: 2662 bytes --]

Hello all,

I fixed both bugs on my work computer, but it is hard to push a patch
because the DPDK git workflow has so many steps.

https://bugs.dpdk.org/show_bug.cgi?id=1227
https://bugs.dpdk.org/show_bug.cgi?id=1229

Best regards.




* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-09 11:13   ` Yasin CANER
  2023-05-11 14:14     ` Yasin CANER
@ 2023-05-17 17:53     ` Ferruh Yigit
  2023-05-18  8:14       ` Yasin CANER
  1 sibling, 1 reply; 13+ messages in thread
From: Ferruh Yigit @ 2023-05-17 17:53 UTC (permalink / raw)
  To: Yasin CANER, Stephen Hemminger; +Cc: users

Selam Yasin,


You can expect the 'kni->alloc_q' fifo to be full; this is not a memory leak.

As you pointed out, the number of mbufs consumed by the kernel from
'alloc_q' and the number of mbufs added to 'alloc_q' are not equal, and
this is expected.

The target here is to prevent buffer underflow from the kernel's
perspective, so that it always has mbufs available for new packets.
That is why new mbufs are added to 'alloc_q' at, in the worst case, the
same rate as they are consumed, and sometimes at a higher rate.

You should calculate your mbuf requirement with the assumption that
'kni->alloc_q' will be full of mbufs.


'kni->alloc_q' is freed when the kni is removed.
Since 'alloc_q' holds the physical addresses of the mbufs, it is a
little challenging to free them in userspace; that is why the kernel
first tries to move the mbufs to the 'kni->free_q' fifo. Please check
'kni_net_release_fifo_phy()' for this.

If all of them are moved to the 'free_q' fifo, nothing is left in
'alloc_q'; if not, userspace frees 'alloc_q' in 'rte_kni_release()'
with the following call:
`kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`

I can see you have submitted fixes for this issue; although, as
explained above, I don't think a defect exists, I will review them
today/tomorrow.

Regards,
Ferruh





* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-17 17:53     ` Ferruh Yigit
@ 2023-05-18  8:14       ` Yasin CANER
  2023-05-18 14:56         ` Ferruh Yigit
  0 siblings, 1 reply; 13+ messages in thread
From: Yasin CANER @ 2023-05-18  8:14 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Stephen Hemminger, users


Hello Ferruh,

Thanks for your kind response. Also thanks to Stephen.

Even if only 1 packet is consumed by the kernel, each rx_kni call
allocates another 32 units. After a while the whole mempool is used up
in the kni alloc_q; there is no room left.

Do you think my mistake is using one common mempool for both kni and eth?

If a separate mempool is needed, I'd like to note that in the docs.

Best regards.




* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-18  8:14       ` Yasin CANER
@ 2023-05-18 14:56         ` Ferruh Yigit
  2023-05-19 17:47           ` Yasin CANER
  0 siblings, 1 reply; 13+ messages in thread
From: Ferruh Yigit @ 2023-05-18 14:56 UTC (permalink / raw)
  To: Yasin CANER; +Cc: Stephen Hemminger, users

On 5/18/2023 9:14 AM, Yasin CANER wrote:
> Hello Ferruh,
> 
> Thanks for your kind response. Also thanks to Stephen.
> 
> Even if 1 packet is consumed from the kernel , each time rx_kni
> allocates another 32 units. After a while all mempool is used in alloc_q
> from kni. there is not any room for it.
> 

What you describe continues only until 'alloc_q' is full; by default the
fifo length is 1024 (KNI_FIFO_COUNT_MAX). Do you allocate fewer mbufs
than that in your mempool?

You can consider either increasing the mempool size or decreasing the
'alloc_q' fifo length, but reducing the fifo size may cause performance
issues, so you need to evaluate that option.

> Do you think my mistake is using one and common mempool usage both kni
> and eth?
> 

Using same mempool for both is fine.

> 



* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-18 14:56         ` Ferruh Yigit
@ 2023-05-19 17:47           ` Yasin CANER
  2023-05-19 18:43             ` Ferruh Yigit
  0 siblings, 1 reply; 13+ messages in thread
From: Yasin CANER @ 2023-05-19 17:47 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Stephen Hemminger, users


Hello,

I tested all day, both before and after patching.

I could not determine whether it is a memory leak or not. Maybe it just
needs optimization. You lead, I follow.

1-) You are right, alloc_q is never bigger than 1024. But it always
allocates 32 units at a time, and only after more than 1024 accumulate
are they freed back. Maybe it just takes time, I don't know.

2-) I tested tx_rs_thresh via ping. After 210 seconds, most of the
allocated mbufs are back in the mempool. (The driver is virtio and the
eth devices are bound via igb_uio.) It really takes time, so it is better
to increase the size of the mempool.
(https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html)

3-) I tried listing the mempool state at random intervals.

Test 1 (old code): ICMP testing. The whole mempool size is about 10,350.
After the FIFO reaches its maximum size of 1024, about 10% of the mempool
is in use. But little by little more memory stays in use and doesn't go
back to the pool. I could not find the reason.

(columns: mbufs in use, mbufs available, % of pool in use)
MBUF_POOL    448   9,951   4.31% [|....................]
MBUF_POOL  1,947   8,452  18.72% [||||.................]
MBUF_POOL  1,803   8,596  17.34% [||||.................]
MBUF_POOL  1,941   8,458  18.67% [||||.................]
MBUF_POOL  1,900   8,499  18.27% [||||.................]
MBUF_POOL  1,999   8,400  19.22% [||||.................]
MBUF_POOL  1,724   8,675  16.58% [||||.................]
MBUF_POOL  1,811   8,588  17.42% [||||.................]
MBUF_POOL  1,978   8,421  19.02% [||||.................]
MBUF_POOL  2,008   8,391  19.31% [||||.................]
MBUF_POOL  1,854   8,545  17.83% [||||.................]
MBUF_POOL  1,922   8,477  18.48% [||||.................]
MBUF_POOL  1,892   8,507  18.19% [||||.................]
MBUF_POOL  1,957   8,442  18.82% [||||.................]

Test 2 (old code): ran iperf3 UDP testing from the kernel to the eth
device. Waited 4 minutes to see what happens. Memory doesn't go back to
the mempool; little by little, memory usage increases.

MBUF_POOL    512   9,887   4.92% [|....................]
MBUF_POOL  1,411   8,988  13.57% [|||..................]
MBUF_POOL  1,390   9,009  13.37% [|||..................]
MBUF_POOL  1,558   8,841  14.98% [|||..................]
MBUF_POOL  1,453   8,946  13.97% [|||..................]
MBUF_POOL  1,525   8,874  14.66% [|||..................]
MBUF_POOL  1,592   8,807  15.31% [||||.................]
MBUF_POOL  1,639   8,760  15.76% [||||.................]
MBUF_POOL  1,624   8,775  15.62% [||||.................]
MBUF_POOL  1,618   8,781  15.56% [||||.................]
MBUF_POOL  1,708   8,691  16.42% [||||.................]
iperf is STOPPED to tx_fresh for 4 min
MBUF_POOL  1,709   8,690  16.43% [||||.................]
iperf is STOPPED to tx_fresh for 4 min
MBUF_POOL  1,709   8,690  16.43% [||||.................]
MBUF_POOL  1,683   8,716  16.18% [||||.................]
MBUF_POOL  1,563   8,836  15.03% [||||.................]
MBUF_POOL  1,726   8,673  16.60% [||||.................]
MBUF_POOL  1,589   8,810  15.28% [||||.................]
MBUF_POOL  1,556   8,843  14.96% [|||..................]
MBUF_POOL  1,610   8,789  15.48% [||||.................]
MBUF_POOL  1,616   8,783  15.54% [||||.................]
MBUF_POOL  1,709   8,690  16.43% [||||.................]
MBUF_POOL  1,740   8,659  16.73% [||||.................]
MBUF_POOL  1,546   8,853  14.87% [|||..................]
MBUF_POOL  1,710   8,689  16.44% [||||.................]
MBUF_POOL  1,787   8,612  17.18% [||||.................]
MBUF_POOL  1,579   8,820  15.18% [||||.................]
MBUF_POOL  1,780   8,619  17.12% [||||.................]
MBUF_POOL  1,679   8,720  16.15% [||||.................]
MBUF_POOL  1,604   8,795  15.42% [||||.................]
MBUF_POOL  1,761   8,638  16.93% [||||.................]
MBUF_POOL  1,773   8,626  17.05% [||||.................]

Test 3 (after patching): ran iperf3 UDP testing from the kernel to the
eth device. Looks stable after patching.

MBUF_POOL                      76             10,323
0.73% [|....................]
MBUF_POOL                      193            10,206
1.86% [|....................]
MBUF_POOL                      96             10,303
0.92% [|....................]
MBUF_POOL                      269            10,130
2.59% [|....................]
MBUF_POOL                      102            10,297
0.98% [|....................]
MBUF_POOL                      235            10,164
2.26% [|....................]
MBUF_POOL                      87             10,312
0.84% [|....................]
MBUF_POOL                      293            10,106
2.82% [|....................]
MBUF_POOL                      99             10,300
0.95% [|....................]
MBUF_POOL                      296            10,103
2.85% [|....................]
MBUF_POOL                      90             10,309
0.87% [|....................]
MBUF_POOL                      299            10,100
2.88% [|....................]
MBUF_POOL                      86             10,313
0.83% [|....................]
MBUF_POOL                      262            10,137
2.52% [|....................]
MBUF_POOL                      81             10,318
0.78% [|....................]
MBUF_POOL                      81             10,318
0.78% [|....................]
MBUF_POOL                      87             10,312
0.84% [|....................]
MBUF_POOL                      252            10,147
2.42% [|....................]
MBUF_POOL                      97             10,302
0.93% [|....................]
iperf is STOPPED to tx_fresh for 4 min
MBUF_POOL                      296            10,103
2.85% [|....................]
MBUF_POOL                      95             10,304
0.91% [|....................]
MBUF_POOL                      269            10,130
2.59% [|....................]
MBUF_POOL                      302            10,097
2.90% [|....................]
MBUF_POOL                      88             10,311
0.85% [|....................]
MBUF_POOL                      305            10,094
2.93% [|....................]
MBUF_POOL                      88             10,311
0.85% [|....................]
MBUF_POOL                      290            10,109
2.79% [|....................]
MBUF_POOL                      84             10,315
0.81% [|....................]
MBUF_POOL                      85             10,314
0.82% [|....................]
MBUF_POOL                      291            10,108
2.80% [|....................]
MBUF_POOL                      303            10,096
2.91% [|....................]
MBUF_POOL                      92             10,307
0.88% [|....................]


Best regards.


On Thu, 18 May 2023 at 17:56, Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 5/18/2023 9:14 AM, Yasin CANER wrote:
> > Hello Ferruh,
> >
> > Thanks for your kind response. Also thanks to Stephen.
> >
> > Even if 1 packet is consumed from the kernel , each time rx_kni
> > allocates another 32 units. After a while all mempool is used in alloc_q
> > from kni. there is not any room for it.
> >
>
> What you described continues until 'alloc_q' is full, by default fifo
> length is 1024 (KNI_FIFO_COUNT_MAX), do you allocate less mbuf in your
> mempool?
>
> You can consider either increasing mempool size, or decreasing 'alloc_q'
> fifo length, but reducing fifo size may cause performance issues so you
> need to evaluate that option.
>
> > Do you think my mistake is using one and common mempool usage both kni
> > and eth?
> >
>
> Using same mempool for both is fine.
>
> > If it needs a separate mempool , i'd like to note in docs.
> >
> > Best regards.
> >
> > Ferruh Yigit <ferruh.yigit@amd.com <mailto:ferruh.yigit@amd.com>>, 17
> > May 2023 Çar, 20:53 tarihinde şunu yazdı:
> >
> >     On 5/9/2023 12:13 PM, Yasin CANER wrote:
> >     > Hello,
> >     >
> >     > I draw a flow via asciiflow to explain myself better. Problem is
> after
> >     > transmitting packets(mbufs) , it never puts in the kni->free_q to
> back
> >     > to the original pool. Each cycle, it allocates another 32 units
> that
> >     > cause leaks. Or I am missing something.
> >     >
> >     > I already tried the rte_eth_tx_done_cleanup() function but it
> >     didn't fix
> >     > anything.
> >     >
> >     > I am working on a patch to fix this issue but I am not sure if
> there
> >     > is another way.
> >     >
> >     > Best regards.
> >     >
> >     > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
> >     >
> >     >
> >     > unsigned
> >     > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
> >     unsigned
> >     > int num)
> >     > {
> >     > unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
> >     >
> >     > /* If buffers removed, allocate mbufs and then put them into
> >     alloc_q */
> >     > /* Question, how to test buffers is removed or not?*/
> >     > if (ret)
> >     >     kni_allocate_mbufs(kni);
> >     >
> >     > return ret;
> >     > }
> >     >
> >
> >     Selam Yasin,
> >
> >
> >     You can expect 'kni->alloc_q' fifo to be full, this is not a memory
> >     leak.
> >
> >     As you pointed out, number of mbufs consumed by kernel from 'alloc_q'
> >     and number of mbufs added to 'alloc_q' is not equal and this is
> >     expected.
> >
> >     Target here is to prevent buffer underflow from kernel perspective,
> so
> >     it will always have available mbufs for new packets.
> >     That is why new mbufs are added to 'alloc_q' at worst same or
> sometimes
> >     higher rate than it is consumed.
> >
> >     You should calculate your mbuf requirement with the assumption that
> >     'kni->alloc_q' will be full of mbufs.
> >
> >
> >     'kni->alloc_q' is freed when kni is removed.
> >     Since 'alloc_q' holds physical address of the mbufs, it is a little
> >     challenging to free them in the userspace, that is why first kernel
> >     tries to move mbufs to 'kni->free_q' fifo, please check
> >     'kni_net_release_fifo_phy()' for it.
> >
> >     If all moved to 'free_q' fifo, nothing left to in 'alloc_q', but if
> not,
> >     userspace frees 'alloc_q' in 'rte_kni_release()', with following
> call:
> >     `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
> >
> >
> >     I can see you have submitted fixes for this issue, although as I
> >     explained above I don't think a defect exist, I will review them
> >     today/tomorrow.
> >
> >     Regards,
> >     Ferruh
> >
> >
> >     > Stephen Hemminger <stephen@networkplumber.org
> >     <mailto:stephen@networkplumber.org>
> >     > <mailto:stephen@networkplumber.org
> >     <mailto:stephen@networkplumber.org>>>, 8 May 2023 Pzt, 19:18
> tarihinde
> >     > şunu yazdı:
> >     >
> >     >     On Mon, 8 May 2023 09:01:41 +0300
> >     >     Yasin CANER <yasinncaner@gmail.com
> >     <mailto:yasinncaner@gmail.com> <mailto:yasinncaner@gmail.com
> >     <mailto:yasinncaner@gmail.com>>>
> >     >     wrote:
> >     >
> >     >     > Hello Stephen,
> >     >     >
> >     >     > Thank you for response, it helps me a lot. I understand
> problem
> >     >     better.
> >     >     >
> >     >     > After reading mbuf library (
> >     >     > https://doc.dpdk.org/guides/prog_guide/mempool_lib.html)  i
> >     >     realized that
> >     >     > 31 units allocation memory slot doesn't return to pool!
> >     >
> >     >     If receive burst returns 1 mbuf, the other 31 pointers in the
> >     array
> >     >     are not valid. They do not point to mbufs.
> >     >
> >     >     > 1 unit mbuf can be freed via rte_pktmbuf_free so it can back
> >     to pool.
> >     >     >
> >     >     > Main problem is that allocation doesn't return to original
> pool,
> >     >     act as
> >     >     > used. So, after following rte_pktmbuf_free
> >     >     >
> >     >
> >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902>
> >     >     > function,
> >     >     > i realized that there is 2 function to helps to mbufs back
> >     to pool.
> >     >     >
> >     >     > These are rte_mbuf_raw_free
> >     >     >
> >     >
> >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432>
> >     >     >  and rte_pktmbuf_free_seg
> >     >     >
> >     >
> >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37>.
> >     >     > I will focus on them.
> >     >     >
> >     >     > If there is another suggestion, I will be very pleased.
> >     >     >
> >     >     > Best regards.
> >     >     >
> >     >     > Yasin CANER
> >     >     > Ulak
> >     >
> >
>
>

[-- Attachment #2: Type: text/html, Size: 24249 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-19 17:47           ` Yasin CANER
@ 2023-05-19 18:43             ` Ferruh Yigit
  2023-05-29  6:33               ` Yasin CANER
  0 siblings, 1 reply; 13+ messages in thread
From: Ferruh Yigit @ 2023-05-19 18:43 UTC (permalink / raw)
  To: Yasin CANER; +Cc: Stephen Hemminger, users

On 5/19/2023 6:47 PM, Yasin CANER wrote:
> Hello,
> 

Hi,

Can you please bottom-post? The combination of both makes the discussion
very hard to follow.

> I tested all day both before and after patching.
> 
> I could not understand that it is a memory leak or not. Maybe it needs
> optimization. You lead, I follow.
> 
> 1-) You are right, alloc_q is never bigger than 1024.  But it always
> allocates 32 units then more than 1024 are being freed. Maybe it takes
> time, I don't know.
> 

At least, alloc_q is only freed on KNI release, so mbufs in that fifo can
sit there for as long as the application is running.

> 2-) I tested tx_rs_thresh via ping. After 210 sec , allocated memories
> are back to mempool (most of them). (driver virtio and eth-devices are
> binded via igb_uio) . It really takes time. So it is better to increase
> the size of the mempool.
> (https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html)
> 
> 3-) try to list mempool state in randomly
> 

It looks like the number of mbufs in use is increasing, but in the worst
case both alloc_q and free_q can be full, which accounts for 2048 mbufs,
and in the tests below the number of used mbufs never exceeds this value,
so it looks OK.
If you run your test for a longer duration, do you observe the used mbuf
count going much above this number?

Also, what is the 'num' parameter passed to the 'rte_kni_tx_burst()' API?
If it is bigger than 'MAX_MBUF_BURST_NUM', that may lead to mbufs
accumulating in the free_q fifo.


As an experiment, it is possible to decrease the KNI fifo sizes and
observe the result.
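The burst-size concern above can be sketched as a chunked transmit loop that never hands more than MAX_MBUF_BURST_NUM buffers to the KNI per call. This is a hedged illustration, not DPDK code: kni_tx_stub() merely stands in for rte_kni_tx_burst(), and kni_tx_chunked() is a hypothetical helper.

```c
#include <stddef.h>

#define MAX_MBUF_BURST_NUM 32   /* KNI's internal per-call burst limit */

/* Hypothetical stand-in for rte_kni_tx_burst(): accepts at most
 * MAX_MBUF_BURST_NUM buffers per invocation, returns how many it took. */
static unsigned kni_tx_stub(void **bufs, unsigned num)
{
    (void)bufs;
    return num > MAX_MBUF_BURST_NUM ? MAX_MBUF_BURST_NUM : num;
}

/* Transmit 'num' buffers in chunks of at most MAX_MBUF_BURST_NUM, so the
 * free_q fifo is drained at the same granularity it is filled. Returns
 * the number of buffers actually accepted. */
static unsigned kni_tx_chunked(void **bufs, unsigned num)
{
    unsigned sent = 0;
    while (sent < num) {
        unsigned n = num - sent;
        if (n > MAX_MBUF_BURST_NUM)
            n = MAX_MBUF_BURST_NUM;
        unsigned done = kni_tx_stub(bufs + sent, n);
        sent += done;
        if (done < n)        /* fifo full: stop, caller retries or frees */
            break;
    }
    return sent;
}
```

In a real application the unsent tail (bufs[sent..num-1]) must be freed or retried, otherwise those mbufs leak.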


> Test -1 -) (old code) ICMP testing. The whole mempool size is about
> 10350. So after FIFO reaches max-size -1024, %10 of the size of the
> mempool is in use. But little by little memory is waiting in use and
> doesn't go back to the pool. I could not find the reason.
> 
> MBUF_POOL                      448            9,951                    
>  4.31% [|....................]
> MBUF_POOL                      1,947          8,452                    
> 18.72% [||||.................]
> MBUF_POOL                      1,803          8,596                    
> 17.34% [||||.................]
> MBUF_POOL                      1,941          8,458                    
> 18.67% [||||.................]
> MBUF_POOL                      1,900          8,499                    
> 18.27% [||||.................]
> MBUF_POOL                      1,999          8,400                    
> 19.22% [||||.................]
> MBUF_POOL                      1,724          8,675                    
> 16.58% [||||.................]
> MBUF_POOL                      1,811          8,588                    
> 17.42% [||||.................]
> MBUF_POOL                      1,978          8,421                    
> 19.02% [||||.................]
> MBUF_POOL                      2,008          8,391                    
> 19.31% [||||.................]
> MBUF_POOL                      1,854          8,545                    
> 17.83% [||||.................]
> MBUF_POOL                      1,922          8,477                    
> 18.48% [||||.................]
> MBUF_POOL                      1,892          8,507                    
> 18.19% [||||.................]
> MBUF_POOL                      1,957          8,442                    
> 18.82% [||||.................]
> 
> Test-2 -) (old code) run iperf3 udp testing that from Kernel to eth
> device. Waited to see what happens in 4 min. memory doesn't go back to
> the mempool. little by little, memory usage increases.
> 
> MBUF_POOL                      512            9,887                    
>  4.92% [|....................]
> MBUF_POOL                      1,411          8,988                    
> 13.57% [|||..................]
> MBUF_POOL                      1,390          9,009                    
> 13.37% [|||..................]
> MBUF_POOL                      1,558          8,841                    
> 14.98% [|||..................]
> MBUF_POOL                      1,453          8,946                    
> 13.97% [|||..................]
> MBUF_POOL                      1,525          8,874                    
> 14.66% [|||..................]
> MBUF_POOL                      1,592          8,807                    
> 15.31% [||||.................]
> MBUF_POOL                      1,639          8,760                    
> 15.76% [||||.................]
> MBUF_POOL                      1,624          8,775                    
> 15.62% [||||.................]
> MBUF_POOL                      1,618          8,781                    
> 15.56% [||||.................]
> MBUF_POOL                      1,708          8,691                    
> 16.42% [||||.................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL                      1,709          8,690                    
> 16.43% [||||.................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL                      1,709          8,690                    
> 16.43% [||||.................]
> MBUF_POOL                      1,683          8,716                    
> 16.18% [||||.................]
> MBUF_POOL                      1,563          8,836                    
> 15.03% [||||.................]
> MBUF_POOL                      1,726          8,673                    
> 16.60% [||||.................]
> MBUF_POOL                      1,589          8,810                    
> 15.28% [||||.................]
> MBUF_POOL                      1,556          8,843                    
> 14.96% [|||..................]
> MBUF_POOL                      1,610          8,789                    
> 15.48% [||||.................]
> MBUF_POOL                      1,616          8,783                    
> 15.54% [||||.................]
> MBUF_POOL                      1,709          8,690                    
> 16.43% [||||.................]
> MBUF_POOL                      1,740          8,659                    
> 16.73% [||||.................]
> MBUF_POOL                      1,546          8,853                    
> 14.87% [|||..................]
> MBUF_POOL                      1,710          8,689                    
> 16.44% [||||.................]
> MBUF_POOL                      1,787          8,612                    
> 17.18% [||||.................]
> MBUF_POOL                      1,579          8,820                    
> 15.18% [||||.................]
> MBUF_POOL                      1,780          8,619                    
> 17.12% [||||.................]
> MBUF_POOL                      1,679          8,720                    
> 16.15% [||||.................]
> MBUF_POOL                      1,604          8,795                    
> 15.42% [||||.................]
> MBUF_POOL                      1,761          8,638                    
> 16.93% [||||.................]
> MBUF_POOL                      1,773          8,626                    
> 17.05% [||||.................]
> 
> Test-3 -) (after patching)  run iperf3 udp testing that from Kernel to
> eth device. looks stable.
> After patching ,
> 
> MBUF_POOL                      76             10,323                    
> 0.73% [|....................]
> MBUF_POOL                      193            10,206                    
> 1.86% [|....................]
> MBUF_POOL                      96             10,303                    
> 0.92% [|....................]
> MBUF_POOL                      269            10,130                    
> 2.59% [|....................]
> MBUF_POOL                      102            10,297                    
> 0.98% [|....................]
> MBUF_POOL                      235            10,164                    
> 2.26% [|....................]
> MBUF_POOL                      87             10,312                    
> 0.84% [|....................]
> MBUF_POOL                      293            10,106                    
> 2.82% [|....................]
> MBUF_POOL                      99             10,300                    
> 0.95% [|....................]
> MBUF_POOL                      296            10,103                    
> 2.85% [|....................]
> MBUF_POOL                      90             10,309                    
> 0.87% [|....................]
> MBUF_POOL                      299            10,100                    
> 2.88% [|....................]
> MBUF_POOL                      86             10,313                    
> 0.83% [|....................]
> MBUF_POOL                      262            10,137                    
> 2.52% [|....................]
> MBUF_POOL                      81             10,318                    
> 0.78% [|....................]
> MBUF_POOL                      81             10,318                    
> 0.78% [|....................]
> MBUF_POOL                      87             10,312                    
> 0.84% [|....................]
> MBUF_POOL                      252            10,147                    
> 2.42% [|....................]
> MBUF_POOL                      97             10,302                    
> 0.93% [|....................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL                      296            10,103                    
> 2.85% [|....................]
> MBUF_POOL                      95             10,304                    
> 0.91% [|....................]
> MBUF_POOL                      269            10,130                    
> 2.59% [|....................]
> MBUF_POOL                      302            10,097                    
> 2.90% [|....................]
> MBUF_POOL                      88             10,311                    
> 0.85% [|....................]
> MBUF_POOL                      305            10,094                    
> 2.93% [|....................]
> MBUF_POOL                      88             10,311                    
> 0.85% [|....................]
> MBUF_POOL                      290            10,109                    
> 2.79% [|....................]
> MBUF_POOL                      84             10,315                    
> 0.81% [|....................]
> MBUF_POOL                      85             10,314                    
> 0.82% [|....................]
> MBUF_POOL                      291            10,108                    
> 2.80% [|....................]
> MBUF_POOL                      303            10,096                    
> 2.91% [|....................]
> MBUF_POOL                      92             10,307                    
> 0.88% [|....................]
> 
> 
> Best regards.
> 
> 
> Ferruh Yigit <ferruh.yigit@amd.com <mailto:ferruh.yigit@amd.com>>, 18
> May 2023 Per, 17:56 tarihinde şunu yazdı:
> 
>     On 5/18/2023 9:14 AM, Yasin CANER wrote:
>     > Hello Ferruh,
>     >
>     > Thanks for your kind response. Also thanks to Stephen.
>     >
>     > Even if 1 packet is consumed from the kernel , each time rx_kni
>     > allocates another 32 units. After a while all mempool is used in
>     alloc_q
>     > from kni. there is not any room for it.
>     >
> 
>     What you described continues until 'alloc_q' is full, by default fifo
>     length is 1024 (KNI_FIFO_COUNT_MAX), do you allocate less mbuf in your
>     mempool?
> 
>     You can consider either increasing mempool size, or decreasing 'alloc_q'
>     fifo length, but reducing fifo size may cause performance issues so you
>     need to evaluate that option.
> 
>     > Do you think my mistake is using one and common mempool usage both kni
>     > and eth?
>     >
> 
>     Using same mempool for both is fine.
> 
>     > If it needs a separate mempool , i'd like to note in docs.
>     >
>     > Best regards.
>     >
>     > Ferruh Yigit <ferruh.yigit@amd.com <mailto:ferruh.yigit@amd.com>
>     <mailto:ferruh.yigit@amd.com <mailto:ferruh.yigit@amd.com>>>, 17
>     > May 2023 Çar, 20:53 tarihinde şunu yazdı:
>     >
>     >     On 5/9/2023 12:13 PM, Yasin CANER wrote:
>     >     > Hello,
>     >     >
>     >     > I draw a flow via asciiflow to explain myself better.
>     Problem is after
>     >     > transmitting packets(mbufs) , it never puts in the
>     kni->free_q to back
>     >     > to the original pool. Each cycle, it allocates another 32
>     units that
>     >     > cause leaks. Or I am missing something.
>     >     >
>     >     > I already tried the rte_eth_tx_done_cleanup() function but it
>     >     didn't fix
>     >     > anything.
>     >     >
>     >     > I am working on a patch to fix this issue but I am not sure
>     if there
>     >     > is another way.
>     >     >
>     >     > Best regards.
>     >     >
>     >     > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
>     >     >
>     >     >
>     >     > unsigned
>     >     > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
>     >     unsigned
>     >     > int num)
>     >     > {
>     >     > unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
>     >     >
>     >     > /* If buffers removed, allocate mbufs and then put them into
>     >     alloc_q */
>     >     > /* Question, how to test buffers is removed or not?*/
>     >     > if (ret)
>     >     >     kni_allocate_mbufs(kni);
>     >     >
>     >     > return ret;
>     >     > }
>     >     >
>     >
>     >     Selam Yasin,
>     >
>     >
>     >     You can expect 'kni->alloc_q' fifo to be full, this is not a
>     memory
>     >     leak.
>     >
>     >     As you pointed out, number of mbufs consumed by kernel from
>     'alloc_q'
>     >     and number of mbufs added to 'alloc_q' is not equal and this is
>     >     expected.
>     >
>     >     Target here is to prevent buffer underflow from kernel
>     perspective, so
>     >     it will always have available mbufs for new packets.
>     >     That is why new mbufs are added to 'alloc_q' at worst same or
>     sometimes
>     >     higher rate than it is consumed.
>     >
>     >     You should calculate your mbuf requirement with the assumption
>     that
>     >     'kni->alloc_q' will be full of mbufs.
>     >
>     >
>     >     'kni->alloc_q' is freed when kni is removed.
>     >     Since 'alloc_q' holds physical address of the mbufs, it is a
>     little
>     >     challenging to free them in the userspace, that is why first
>     kernel
>     >     tries to move mbufs to 'kni->free_q' fifo, please check
>     >     'kni_net_release_fifo_phy()' for it.
>     >
>     >     If all moved to 'free_q' fifo, nothing left to in 'alloc_q',
>     but if not,
>     >     userspace frees 'alloc_q' in 'rte_kni_release()', with
>     following call:
>     >     `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
>     >
>     >
>     >     I can see you have submitted fixes for this issue, although as I
>     >     explained above I don't think a defect exist, I will review them
>     >     today/tomorrow.
>     >
>     >     Regards,
>     >     Ferruh
>     >
>     >
>     >     > Stephen Hemminger <stephen@networkplumber.org
>     <mailto:stephen@networkplumber.org>
>     >     <mailto:stephen@networkplumber.org
>     <mailto:stephen@networkplumber.org>>
>     >     > <mailto:stephen@networkplumber.org
>     <mailto:stephen@networkplumber.org>
>     >     <mailto:stephen@networkplumber.org
>     <mailto:stephen@networkplumber.org>>>>, 8 May 2023 Pzt, 19:18 tarihinde
>     >     > şunu yazdı:
>     >     >
>     >     >     On Mon, 8 May 2023 09:01:41 +0300
>     >     >     Yasin CANER <yasinncaner@gmail.com
>     <mailto:yasinncaner@gmail.com>
>     >     <mailto:yasinncaner@gmail.com <mailto:yasinncaner@gmail.com>>
>     <mailto:yasinncaner@gmail.com <mailto:yasinncaner@gmail.com>
>     >     <mailto:yasinncaner@gmail.com <mailto:yasinncaner@gmail.com>>>>
>     >     >     wrote:
>     >     >
>     >     >     > Hello Stephen,
>     >     >     >
>     >     >     > Thank you for response, it helps me a lot. I
>     understand problem
>     >     >     better.
>     >     >     >
>     >     >     > After reading mbuf library (
>     >     >     > https://doc.dpdk.org/guides/prog_guide/mempool_lib.html)  i
>     >     >     realized that
>     >     >     > 31 units allocation memory slot doesn't return to pool!
>     >     >
>     >     >     If receive burst returns 1 mbuf, the other 31 pointers
>     in the
>     >     array
>     >     >     are not valid. They do not point to mbufs.
>     >     >
>     >     >     > 1 unit mbuf can be freed via rte_pktmbuf_free so it
>     can back
>     >     to pool.
>     >     >     >
>     >     >     > Main problem is that allocation doesn't return to
>     original pool,
>     >     >     act as
>     >     >     > used. So, after following rte_pktmbuf_free
>     >     >     >
>     >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902>
>     >     >     > function,
>     >     >     > i realized that there is 2 function to helps to mbufs back
>     >     to pool.
>     >     >     >
>     >     >     > These are rte_mbuf_raw_free
>     >     >     >
>     >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432>
>     >     >     >  and rte_pktmbuf_free_seg
>     >     >     >
>     >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37>.
>     >     >     > I will focus on them.
>     >     >     >
>     >     >     > If there is another suggestion, I will be very pleased.
>     >     >     >
>     >     >     > Best regards.
>     >     >     >
>     >     >     > Yasin CANER
>     >     >     > Ulak
>     >     >
>     >
> 


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-19 18:43             ` Ferruh Yigit
@ 2023-05-29  6:33               ` Yasin CANER
  0 siblings, 0 replies; 13+ messages in thread
From: Yasin CANER @ 2023-05-29  6:33 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Stephen Hemminger, users

[-- Attachment #1: Type: text/plain, Size: 20904 bytes --]

Hello all,

I never stopped testing. It has been 10 days now, and after patching there
is no leak.

MBUF_POOL                      82             10,317
0.79% [|....................]
MBUF_POOL                      83             10,316
0.80% [|....................]
MBUF_POOL                      93             10,306
0.89% [|....................]

Sometimes it takes time for mbufs to get back to the mempool. In my
opinion, this is an OVS-DPDK/OpenStack environment issue. If I get a
chance, I will try to run it on an Intel bare-metal environment.
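The slow return described above is consistent with transmitted mbufs staying parked in the TX descriptor ring until tx_rs_thresh descriptors accumulate. One mitigation is calling rte_eth_tx_done_cleanup() (a real DPDK ethdev API) periodically from the poll loop to force earlier reclaim. The sketch below is a hypothetical illustration: a stub replaces the real call, and CLEANUP_INTERVAL is an arbitrary tuning value.

```c
#include <stdint.h>

/* Hypothetical stand-in for rte_eth_tx_done_cleanup(port, queue, 0),
 * which asks the PMD to free mbufs that were already transmitted but are
 * still held in the TX ring. Here it just counts invocations. */
static unsigned cleanup_calls;
static void tx_done_cleanup_stub(void)
{
    cleanup_calls++;
}

#define CLEANUP_INTERVAL 64   /* arbitrary: reclaim every 64 poll loops */

/* One iteration of the application's poll loop with a periodic reclaim;
 * without it, mbufs sit in the ring until tx_rs_thresh fires. */
static void poll_iteration(uint64_t iter)
{
    /* ... rx burst / tx burst work would happen here ... */
    if (iter % CLEANUP_INTERVAL == 0)
        tx_done_cleanup_stub();
}
```

How aggressively to clean is a trade-off: frequent cleanup returns mbufs to the pool sooner but adds per-loop overhead, which matches Ferruh's performance concern.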

After meeting with Ferruh, who explained his concerns about the
performance impact, I decided to continue patching my application manually.

The issue has been removed from Bugzilla.

For your information.
Best regards.

On Fri, 19 May 2023 at 21:43, Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 5/19/2023 6:47 PM, Yasin CANER wrote:
> > Hello,
> >
>
> Hi,
>
> Can you please bottom-post, combination of both makes discussion very
> hard to follow?
>
> > I tested all day both before and after patching.
> >
> > I could not understand that it is a memory leak or not. Maybe it needs
> > optimization. You lead, I follow.
> >
> > 1-) You are right, alloc_q is never bigger than 1024.  But it always
> > allocates 32 units then more than 1024 are being freed. Maybe it takes
> > time, I don't know.
> >
>
> At least alloc_q is only freed on kni release, so mbufs in that fifo can
> sit there as long as application is running.
>
> > 2-) I tested tx_rs_thresh via ping. After 210 sec , allocated memories
> > are back to mempool (most of them). (driver virtio and eth-devices are
> > binded via igb_uio) . It really takes time. So it is better to increase
> > the size of the mempool.
> > (https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html
> > <https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html>)
> >
> > 3-) try to list mempool state in randomly
> >
>
> It looks like the number of used mbufs is increasing, but in the worst
> case both alloc_q and free_q can be full, which makes 2048 mbufs, and in
> the tests below the used mbuf count never exceeds this value, so it looks OK.
> If you run your test for a longer duration, do you observe the used
> mbuf count going much above this number?
>
> Also, what is the 'num' parameter to the 'rte_kni_tx_burst()' API?
> If it is bigger than 'MAX_MBUF_BURST_NUM', that may lead mbufs to
> accumulate in the free_q fifo.
>
>
> As experiment, it is possible to decrease KNI fifo sizes, and observe
> the result.
>
>
> > Test-1 -) (old code) ICMP testing. The whole mempool size is about
> > 10350. So after the FIFO reaches its max size (1024), about 10% of the
> > mempool is in use. But little by little memory stays in use and
> > doesn't go back to the pool. I could not find the reason.
> >
> > MBUF_POOL                      448            9,951
> >  4.31% [|....................]
> > MBUF_POOL                      1,947          8,452
> > 18.72% [||||.................]
> > MBUF_POOL                      1,803          8,596
> > 17.34% [||||.................]
> > MBUF_POOL                      1,941          8,458
> > 18.67% [||||.................]
> > MBUF_POOL                      1,900          8,499
> > 18.27% [||||.................]
> > MBUF_POOL                      1,999          8,400
> > 19.22% [||||.................]
> > MBUF_POOL                      1,724          8,675
> > 16.58% [||||.................]
> > MBUF_POOL                      1,811          8,588
> > 17.42% [||||.................]
> > MBUF_POOL                      1,978          8,421
> > 19.02% [||||.................]
> > MBUF_POOL                      2,008          8,391
> > 19.31% [||||.................]
> > MBUF_POOL                      1,854          8,545
> > 17.83% [||||.................]
> > MBUF_POOL                      1,922          8,477
> > 18.48% [||||.................]
> > MBUF_POOL                      1,892          8,507
> > 18.19% [||||.................]
> > MBUF_POOL                      1,957          8,442
> > 18.82% [||||.................]
> >
> > Test-2 -) (old code) Ran iperf3 UDP testing from the kernel to the eth
> > device. Waited 4 minutes to see what happens. Memory doesn't go back to
> > the mempool; little by little, memory usage increases.
> >
> > MBUF_POOL                      512            9,887
> >  4.92% [|....................]
> > MBUF_POOL                      1,411          8,988
> > 13.57% [|||..................]
> > MBUF_POOL                      1,390          9,009
> > 13.37% [|||..................]
> > MBUF_POOL                      1,558          8,841
> > 14.98% [|||..................]
> > MBUF_POOL                      1,453          8,946
> > 13.97% [|||..................]
> > MBUF_POOL                      1,525          8,874
> > 14.66% [|||..................]
> > MBUF_POOL                      1,592          8,807
> > 15.31% [||||.................]
> > MBUF_POOL                      1,639          8,760
> > 15.76% [||||.................]
> > MBUF_POOL                      1,624          8,775
> > 15.62% [||||.................]
> > MBUF_POOL                      1,618          8,781
> > 15.56% [||||.................]
> > MBUF_POOL                      1,708          8,691
> > 16.42% [||||.................]
> > iperf is STOPPED to tx_fresh for 4 min
> > MBUF_POOL                      1,709          8,690
> > 16.43% [||||.................]
> > iperf is STOPPED to tx_fresh for 4 min
> > MBUF_POOL                      1,709          8,690
> > 16.43% [||||.................]
> > MBUF_POOL                      1,683          8,716
> > 16.18% [||||.................]
> > MBUF_POOL                      1,563          8,836
> > 15.03% [||||.................]
> > MBUF_POOL                      1,726          8,673
> > 16.60% [||||.................]
> > MBUF_POOL                      1,589          8,810
> > 15.28% [||||.................]
> > MBUF_POOL                      1,556          8,843
> > 14.96% [|||..................]
> > MBUF_POOL                      1,610          8,789
> > 15.48% [||||.................]
> > MBUF_POOL                      1,616          8,783
> > 15.54% [||||.................]
> > MBUF_POOL                      1,709          8,690
> > 16.43% [||||.................]
> > MBUF_POOL                      1,740          8,659
> > 16.73% [||||.................]
> > MBUF_POOL                      1,546          8,853
> > 14.87% [|||..................]
> > MBUF_POOL                      1,710          8,689
> > 16.44% [||||.................]
> > MBUF_POOL                      1,787          8,612
> > 17.18% [||||.................]
> > MBUF_POOL                      1,579          8,820
> > 15.18% [||||.................]
> > MBUF_POOL                      1,780          8,619
> > 17.12% [||||.................]
> > MBUF_POOL                      1,679          8,720
> > 16.15% [||||.................]
> > MBUF_POOL                      1,604          8,795
> > 15.42% [||||.................]
> > MBUF_POOL                      1,761          8,638
> > 16.93% [||||.................]
> > MBUF_POOL                      1,773          8,626
> > 17.05% [||||.................]
> >
> > Test-3 -) (after patching) Ran iperf3 UDP testing from the kernel to
> > the eth device. Looks stable.
> > After patching:
> >
> > MBUF_POOL                      76             10,323
> > 0.73% [|....................]
> > MBUF_POOL                      193            10,206
> > 1.86% [|....................]
> > MBUF_POOL                      96             10,303
> > 0.92% [|....................]
> > MBUF_POOL                      269            10,130
> > 2.59% [|....................]
> > MBUF_POOL                      102            10,297
> > 0.98% [|....................]
> > MBUF_POOL                      235            10,164
> > 2.26% [|....................]
> > MBUF_POOL                      87             10,312
> > 0.84% [|....................]
> > MBUF_POOL                      293            10,106
> > 2.82% [|....................]
> > MBUF_POOL                      99             10,300
> > 0.95% [|....................]
> > MBUF_POOL                      296            10,103
> > 2.85% [|....................]
> > MBUF_POOL                      90             10,309
> > 0.87% [|....................]
> > MBUF_POOL                      299            10,100
> > 2.88% [|....................]
> > MBUF_POOL                      86             10,313
> > 0.83% [|....................]
> > MBUF_POOL                      262            10,137
> > 2.52% [|....................]
> > MBUF_POOL                      81             10,318
> > 0.78% [|....................]
> > MBUF_POOL                      81             10,318
> > 0.78% [|....................]
> > MBUF_POOL                      87             10,312
> > 0.84% [|....................]
> > MBUF_POOL                      252            10,147
> > 2.42% [|....................]
> > MBUF_POOL                      97             10,302
> > 0.93% [|....................]
> > iperf is STOPPED to tx_fresh for 4 min
> > MBUF_POOL                      296            10,103
> > 2.85% [|....................]
> > MBUF_POOL                      95             10,304
> > 0.91% [|....................]
> > MBUF_POOL                      269            10,130
> > 2.59% [|....................]
> > MBUF_POOL                      302            10,097
> > 2.90% [|....................]
> > MBUF_POOL                      88             10,311
> > 0.85% [|....................]
> > MBUF_POOL                      305            10,094
> > 2.93% [|....................]
> > MBUF_POOL                      88             10,311
> > 0.85% [|....................]
> > MBUF_POOL                      290            10,109
> > 2.79% [|....................]
> > MBUF_POOL                      84             10,315
> > 0.81% [|....................]
> > MBUF_POOL                      85             10,314
> > 0.82% [|....................]
> > MBUF_POOL                      291            10,108
> > 2.80% [|....................]
> > MBUF_POOL                      303            10,096
> > 2.91% [|....................]
> > MBUF_POOL                      92             10,307
> > 0.88% [|....................]
> >
> >
> > Best regards.
> >
> >
> > Ferruh Yigit <ferruh.yigit@amd.com> wrote on Thu, 18 May 2023, 17:56:
> >
> >     On 5/18/2023 9:14 AM, Yasin CANER wrote:
> >     > Hello Ferruh,
> >     >
> >     > Thanks for your kind response. Also thanks to Stephen.
> >     >
> >     > Even if only 1 packet is consumed by the kernel, each time
> >     > rx_kni allocates another 32 units. After a while the whole
> >     > mempool is used up in alloc_q by kni; there is no room left.
> >     >
> >
> >     What you described continues until 'alloc_q' is full; by default the
> >     fifo length is 1024 (KNI_FIFO_COUNT_MAX). Do you allocate fewer mbufs
> >     than that in your mempool?
> >
> >     You can consider either increasing the mempool size or decreasing the
> >     'alloc_q' fifo length, but reducing the fifo size may cause performance
> >     issues, so you need to evaluate that option.
> >
> >     > Do you think my mistake is using one and common mempool usage both
> kni
> >     > and eth?
> >     >
> >
> >     Using same mempool for both is fine.
> >
> >     > If it needs a separate mempool , i'd like to note in docs.
> >     >
> >     > Best regards.
> >     >
> >     > Ferruh Yigit <ferruh.yigit@amd.com> wrote on Wed, 17 May 2023, 20:53:
> >     >
> >     >     On 5/9/2023 12:13 PM, Yasin CANER wrote:
> >     >     > Hello,
> >     >     >
> >     >     > I drew a flow via asciiflow to explain myself better. The
> >     >     > problem is that after transmitting packets (mbufs), they are
> >     >     > never put into kni->free_q to go back to the original pool.
> >     >     > Each cycle allocates another 32 units, which causes leaks.
> >     >     > Or I am missing something.
> >     >     >
> >     >     > I already tried the rte_eth_tx_done_cleanup() function, but
> >     >     > it didn't fix anything.
> >     >     >
> >     >     > I am working on a patch to fix this issue, but I am not sure
> >     >     > if there is another way.
> >     >     >
> >     >     > Best regards.
> >     >     >
> >     >     > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
> >     >     >
> >     >     >
> >     >     > unsigned
> >     >     > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
> >     >     >                  unsigned int num)
> >     >     > {
> >     >     >     unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
> >     >     >
> >     >     >     /* If buffers removed, allocate mbufs and then put them into alloc_q */
> >     >     >     /* Question: how to test whether buffers were removed? */
> >     >     >     if (ret)
> >     >     >         kni_allocate_mbufs(kni);
> >     >     >
> >     >     >     return ret;
> >     >     > }
> >     >     >
> >     >
> >     >     Selam Yasin,
> >     >
> >     >
> >     >     You can expect 'kni->alloc_q' fifo to be full, this is not a
> >     memory
> >     >     leak.
> >     >
> >     >     As you pointed out, the number of mbufs consumed by the kernel
> >     >     from 'alloc_q' and the number of mbufs added to 'alloc_q' are
> >     >     not equal, and this is expected.
> >     >
> >     >     The target here is to prevent buffer underflow from the kernel's
> >     >     perspective, so it always has mbufs available for new packets.
> >     >     That is why new mbufs are added to 'alloc_q' at the same or
> >     >     sometimes a higher rate than they are consumed.
> >     >
> >     >     You should calculate your mbuf requirement with the assumption
> >     that
> >     >     'kni->alloc_q' will be full of mbufs.
> >     >
> >     >
> >     >     'kni->alloc_q' is freed when kni is removed.
> >     >     Since 'alloc_q' holds the physical addresses of the mbufs, it is
> >     >     a little challenging to free them in userspace; that is why the
> >     >     kernel first tries to move mbufs to the 'kni->free_q' fifo, please
> >     >     check 'kni_net_release_fifo_phy()' for it.
> >     >
> >     >     If all are moved to the 'free_q' fifo, nothing is left in
> >     >     'alloc_q'; if not, userspace frees 'alloc_q' in
> >     >     'rte_kni_release()' with the following call:
> >     >     `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
> >     >
> >     >
> >     >     I can see you have submitted fixes for this issue; although, as
> >     >     I explained above, I don't think a defect exists, I will review
> >     >     them today/tomorrow.
> >     >
> >     >     Regards,
> >     >     Ferruh
> >     >
> >     >
> >     >     > Stephen Hemminger <stephen@networkplumber.org> wrote on Mon,
> >     >     > 8 May 2023, 19:18:
> >     >     >
> >     >     >     On Mon, 8 May 2023 09:01:41 +0300
> >     >     >     Yasin CANER <yasinncaner@gmail.com>
> >     >     >
> >     >     >     > Hello Stephen,
> >     >     >     >
> >     >     >     > Thank you for the response, it helps me a lot. I
> >     >     >     > understand the problem better.
> >     >     >     >
> >     >     >     > After reading the mbuf library docs
> >     >     >     > (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html)
> >     >     >     > I realized that 31 allocated mbufs don't return to the pool!
> >     >     >
> >     >     >     If receive burst returns 1 mbuf, the other 31 pointers
> >     in the
> >     >     array
> >     >     >     are not valid. They do not point to mbufs.
> >     >     >
> >     >     >     > 1 mbuf can be freed via rte_pktmbuf_free so it goes
> >     >     >     > back to the pool.
> >     >     >     >
> >     >     >     > The main problem is that allocations don't return to the
> >     >     >     > original pool; they act as used. So, after following the
> >     >     >     > rte_pktmbuf_free function
> >     >     >     > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902),
> >     >     >     > I realized that there are 2 functions that help mbufs get
> >     >     >     > back to the pool.
> >     >     >     >
> >     >     >     > These are rte_mbuf_raw_free
> >     >     >     > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432)
> >     >     >     > and rte_pktmbuf_free_seg
> >     >     >     > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37).
> >     >     >     > I will focus on them.
> >     >     >     >
> >     >     >     > If there is another suggestion, I will be very pleased.
> >     >     >     >
> >     >     >     > Best regards.
> >     >     >     >
> >     >     >     > Yasin CANER
> >     >     >     > Ulak
> >     >     >
> >     >
> >
>
>

[-- Attachment #2: Type: text/html, Size: 35469 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-04 13:00 ` Yasin CANER
@ 2023-05-04 16:14   ` Stephen Hemminger
  0 siblings, 0 replies; 13+ messages in thread
From: Stephen Hemminger @ 2023-05-04 16:14 UTC (permalink / raw)
  To: Yasin CANER; +Cc: users

On Thu, 4 May 2023 13:00:32 +0000
Yasin CANER <yasin.caner@ulakhaberlesme.com.tr> wrote:

> In default-testing kni application works as below
> 
> 
>   1.  Call rte_kni_rx_burst function to get messages
>   2.  Then push to the other KNI interface via rte_kni_tx_burst. There is no memory leak because kni_free_mbufs is called and frees unused allocations.
> 
> On the other hand, in my scenario
> 
> 
>   1.  Call rte_kni_rx_burst to get messages; burst_size is 32 but 1 packet is received from the kernel
>   2.  Then try to free all messages via rte_pktmbuf_free
>   3.  1 unit is freed and 31 units are not. Memory leak
> 
> Other scenario,
> 
> 
>   1.  Call rte_kni_rx_burst to get messages; burst_size is 32 but 1 packet is received from the kernel
>   2.  Push to the ethernet device via rte_eth_tx_burst
>   3.  There is no free operation in rte_eth_tx_burst
>   4.  Try to free via rte_pktmbuf_free
>   5.  1 unit is freed, 31 units are left in memory. Still a memory leak


It looks like you are confused about the lifetime of mbufs, and the "ownership" of the mbuf.

When you do kni_rx_burst, one mbuf is full of data and returned. The other 31 slots are not used.
Only the first mbuf is valid.

When an mbuf is passed to another DPDK device driver for transmit, the mbuf is then owned by the
device. This mbuf cannot be freed until the device has completed DMA and finished transmitting it.
Also, many devices defer freeing transmit mbufs as an optimization. There is some limited control
over transmit freeing via tx_free_thresh. See the DPDK programmer's guide for more info:
https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: DPDK 22.11 - How to fix memory leak for KNI - How to debug
  2023-05-04  7:32 Yasin CANER
@ 2023-05-04 13:00 ` Yasin CANER
  2023-05-04 16:14   ` Stephen Hemminger
  0 siblings, 1 reply; 13+ messages in thread
From: Yasin CANER @ 2023-05-04 13:00 UTC (permalink / raw)
  To: users

[-- Attachment #1: Type: text/plain, Size: 7053 bytes --]

Hello all,

I hit an issue; maybe there is a missing piece in rte_kni.c or other parts.

There is no function to free KNI-allocated mbufs, and rte_pktmbuf_free is not enough to handle it (kni_free_mbufs is not reachable from the application).

In default-testing kni application works as below


  1.  Call rte_kni_rx_burst function to get messages
  2.  Then push to the other KNI interface via rte_kni_tx_burst. There is no memory leak because kni_free_mbufs is called and frees unused allocations.

On the other hand, in my scenario


  1.  Call rte_kni_rx_burst to get messages; burst_size is 32 but 1 packet is received from the kernel
  2.  Then try to free all messages via rte_pktmbuf_free
  3.  1 unit is freed and 31 units are not. Memory leak

Other scenario,


  1.  Call rte_kni_rx_burst to get messages; burst_size is 32 but 1 packet is received from the kernel
  2.  Push to the ethernet device via rte_eth_tx_burst
  3.  There is no free operation in rte_eth_tx_burst
  4.  Try to free via rte_pktmbuf_free
  5.  1 unit is freed, 31 units are left in memory. Still a memory leak


What am I missing? I think the same issue happens in version 17.11.

unsigned
rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
{
	num = RTE_MIN(kni_fifo_free_count(kni->rx_q), num);
	void *phy_mbufs[num];
	unsigned int ret;
	unsigned int i;

	for (i = 0; i < num; i++)
		phy_mbufs[i] = va2pa_all(mbufs[i]);

	ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);

	/* Get mbufs from free_q and then free them */
	kni_free_mbufs(kni);  /* <-- here the unused allocations are freed */

	return ret;
}

unsigned
rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
{
	unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);

	/* If buffers removed, allocate mbufs and then put them into alloc_q */
	if (ret)
		kni_allocate_mbufs(kni);

	return ret;
}

static void
kni_free_mbufs(struct rte_kni *kni)
{
	int i, ret;
	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];

	/* To free all allocated memory, this kni->free_q fifo must be drained. */
	ret = kni_fifo_get(kni->free_q, (void **)pkts, MAX_MBUF_BURST_NUM);
	if (likely(ret > 0)) {
		for (i = 0; i < ret; i++)
			rte_pktmbuf_free(pkts[i]);
	}
}


Best Regards.

___
Yasin CANER
Lead Engineer
Ulak Haberleşme A.Ş. Ankara

From: Yasin CANER
Sent: Thursday, May 4, 2023 10:32 AM
To: users@dpdk.org
Subject: DPDK 22.11 - How to fix memory leak for KNI - How to debug

Hello all,

I think there is a memory leak for KNI.

First, I tried to activate the trace module to follow memory management, but could not. It doesn't create a file and I don't have any clue.

Run Command : -c dpdk_core_mask -d librte_net_virtio.so -d librte_mbuf.so -d librte_mempool.so -d librte_mempool_ring.so -d librte_mempool_stack.so -d librte_mempool_bucket.so -d librte_kni.so --log-level lib.kni:debug --log-level lib.eal:debug --log-level lib.ethdev:debug --trace=kni --trace-dir=/tmp/


Secondly, I used the following functions.

``code
  used_mpool  = rte_mempool_in_use_count(rdpdk_cfg->mbuf_pool);
  count_mpool = rte_mempool_avail_count(rdpdk_cfg->mbuf_pool);
``

After calling rte_kni_rx_burst, 32 units are allocated. Then I force-free the message buffer (mbuf). It frees 1 unit; 31 units are left in memory!


How can I fix or understand this issue?
Follow the logs:




  1.  (59383)  6:55:10    lrtc():3212> [KNI]Picked up 1 packets from port 0 [KNI:F000]   --> 1 packet is received from the kernel, which allocates 32 units
  2.  (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:468][avail_mpool:9931]
  3.  (59383)  6:55:10    pkni():2536> [KNI] i:0/1
  4.  (59383)  6:55:10    pkni():2616> [KNI][EGR]  P:[IPv6]  P:[0] [fe80::f816:3eff:fe93:f5fd]->[ff02::2] --> Packet is a broadcast packet IPv6
  5.  (59383)  6:55:10    pkni():2620> [KNI][EGR][pkt-len:70]
  6.  (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:467][avail_mpool:9932] --> mbuf is freed to diagnose the mem-leak. The same happens after calling rte_eth_tx_burst
  7.  (59383)  6:55:10    lrtc():3212> [KNI]Picked up 1 packets from port 1 [KNI:F000] --> 1 packet is received from the kernel, which allocates 32 units (9932 down to 9900); then the same process happens and 31 units are not freed.
  8.  (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:499][avail_mpool:9900]
  9.  (59383)  6:55:10    pkni():2536> [KNI] i:0/1
  10. (59383)  6:55:10    pkni():2616> [KNI][EGR]  P:[IPv6]    P:[1] [fe80::f816:3eff:fed2:9101]->[ff02::2]
  11. (59383)  6:55:10    pkni():2620> [KNI][EGR][pkt-len:70]
  12. (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:498][avail_mpool:9901]


Kernel : 5.4.0-146-generic #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Ubuntu 20.04
DPDK dpdk-stable-22.11.1
igb_uio is used.

Best regards.
___
Yasin CANER
Lead Engineer
Ulak Haberleşme A.Ş. Ankara



________________________________

This e-mail and all files sent with it are intended for authorized natural or legal persons, who should be the only persons to open and read them. If you are not an authorized recipient, you are strictly prohibited from disclosing, copying, forwarding, and using the contents of this e-mail, and you must immediately delete it. Our company does not guarantee the accuracy or thoroughness of the information contained in this message. It is therefore in no way responsible for the content, sending, retrieval and storage of this information. The opinions contained in this message are the views of the sender only and do not necessarily reflect the views of the company. We would like to inform you that any personal data shared with you should be processed in accordance with the Law on Protection of Personal Data numbered 6698.

[-- Attachment #2: Type: text/html, Size: 26943 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* DPDK 22.11 - How to fix memory leak for KNI - How to debug
@ 2023-05-04  7:32 Yasin CANER
  2023-05-04 13:00 ` Yasin CANER
  0 siblings, 1 reply; 13+ messages in thread
From: Yasin CANER @ 2023-05-04  7:32 UTC (permalink / raw)
  To: users

[-- Attachment #1: Type: text/plain, Size: 4167 bytes --]

Hello all,

I think there is a memory leak for KNI.

First, I tried to activate the trace module to follow memory management, but could not. It doesn't create a file and I don't have any clue.

Run Command : -c dpdk_core_mask -d librte_net_virtio.so -d librte_mbuf.so -d librte_mempool.so -d librte_mempool_ring.so -d librte_mempool_stack.so -d librte_mempool_bucket.so -d librte_kni.so --log-level lib.kni:debug --log-level lib.eal:debug --log-level lib.ethdev:debug --trace=kni --trace-dir=/tmp/


Secondly, I used the following functions.

``code
  used_mpool  = rte_mempool_in_use_count(rdpdk_cfg->mbuf_pool);
  count_mpool = rte_mempool_avail_count(rdpdk_cfg->mbuf_pool);
``

After calling rte_kni_rx_burst, 32 units are allocated. Then I force-free the message buffer (mbuf). It frees 1 unit; 31 units are left in memory!


How can I fix or understand this issue?
Follow the logs:




  1.  (59383)  6:55:10    lrtc():3212> [KNI]Picked up 1 packets from port 0 [KNI:F000]   --> 1 packet is received from the kernel, which allocates 32 units
  2.  (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:468][avail_mpool:9931]
  3.  (59383)  6:55:10    pkni():2536> [KNI] i:0/1
  4.  (59383)  6:55:10    pkni():2616> [KNI][EGR]  P:[IPv6]  P:[0] [fe80::f816:3eff:fe93:f5fd]->[ff02::2] --> Packet is a broadcast packet IPv6
  5.  (59383)  6:55:10    pkni():2620> [KNI][EGR][pkt-len:70]
  6.  (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:467][avail_mpool:9932] --> mbuf is freed to diagnose the mem-leak. The same happens after calling rte_eth_tx_burst
  7.  (59383)  6:55:10    lrtc():3212> [KNI]Picked up 1 packets from port 1 [KNI:F000] --> 1 packet is received from the kernel, which allocates 32 units (9932 down to 9900); then the same process happens and 31 units are not freed.
  8.  (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:499][avail_mpool:9900]
  9.  (59383)  6:55:10    pkni():2536> [KNI] i:0/1
  10. (59383)  6:55:10    pkni():2616> [KNI][EGR]  P:[IPv6]    P:[1] [fe80::f816:3eff:fed2:9101]->[ff02::2]
  11. (59383)  6:55:10    pkni():2620> [KNI][EGR][pkt-len:70]
  12. (59383)  6:55:10   print_mempool_tx():2511> [UseCount_mpool:498][avail_mpool:9901]


Kernel : 5.4.0-146-generic #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Ubuntu 20.04
DPDK dpdk-stable-22.11.1
igb_uio is used.

Best regards.
___
Yasin CANER
Lead Engineer
Ulak Haberleşme A.Ş. Ankara



[-- Attachment #2: Type: text/html, Size: 13339 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2023-05-29  6:34 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-08  6:01 DPDK 22.11 - How to fix memory leak for KNI - How to debug Yasin CANER
2023-05-08 16:18 ` Stephen Hemminger
2023-05-09 11:13   ` Yasin CANER
2023-05-11 14:14     ` Yasin CANER
2023-05-17 17:53     ` Ferruh Yigit
2023-05-18  8:14       ` Yasin CANER
2023-05-18 14:56         ` Ferruh Yigit
2023-05-19 17:47           ` Yasin CANER
2023-05-19 18:43             ` Ferruh Yigit
2023-05-29  6:33               ` Yasin CANER
  -- strict thread matches above, loose matches on Subject: below --
2023-05-04  7:32 Yasin CANER
2023-05-04 13:00 ` Yasin CANER
2023-05-04 16:14   ` Stephen Hemminger

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).