DPDK usage discussions
From: Yasin CANER <yasinncaner@gmail.com>
To: Ferruh Yigit <ferruh.yigit@amd.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, users@dpdk.org
Subject: Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
Date: Mon, 29 May 2023 09:33:49 +0300	[thread overview]
Message-ID: <CAP5epcPGCNvx4T=_KJ0K1WbPtEWUr93Zm6fMxL+jnwYy247oDw@mail.gmail.com> (raw)
In-Reply-To: <28c13351-994f-1898-8227-6d6875ed4812@amd.com>

Hello all,

I have kept testing to watch the results. It has been 10 days, and after
patching there is no leak.

MBUF_POOL                      82             10,317
0.79% [|....................]
MBUF_POOL                      83             10,316
0.80% [|....................]
MBUF_POOL                      93             10,306
0.89% [|....................]

Sometimes it takes time for mbufs to get back to the mempool. In my opinion,
this is related to the OVS-DPDK/OpenStack environment. If I get the chance, I
will try to run the same test on an Intel bare-metal environment.
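
These usage lines are produced by querying the mempool with the standard
accessors; a minimal sketch of such a helper (the pool name "MBUF_POOL" and
the output formatting are illustrative, not the exact code used here):

#include <stdio.h>
#include <rte_mempool.h>

static void print_pool_usage(void)
{
        struct rte_mempool *mp = rte_mempool_lookup("MBUF_POOL");

        if (mp == NULL)
                return;

        unsigned int used  = rte_mempool_in_use_count(mp);
        unsigned int avail = rte_mempool_avail_count(mp);

        /* pool name, mbufs in use, mbufs available, percentage in use */
        printf("%-20s %10u %10u %6.2f%%\n", mp->name, used, avail,
               100.0 * used / (used + avail));
}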

After meeting with Ferruh, who explained his concerns about the performance
impact, I decided to keep this as a manual patch in my own application.

The report has been removed from Bugzilla.

For your information.
Best regards.

Ferruh Yigit <ferruh.yigit@amd.com>, on Fri, 19 May 2023 at 21:43, wrote:

> On 5/19/2023 6:47 PM, Yasin CANER wrote:
> > Hello,
> >
>
> Hi,
>
> Can you please bottom-post? The combination of both makes the discussion
> very hard to follow.
>
> > I tested all day both before and after patching.
> >
> > I could not tell whether it is a memory leak or not. Maybe it needs
> > optimization. You lead, I follow.
> >
> > 1-) You are right, alloc_q is never bigger than 1024. But it always
> > allocates 32 units at a time, and then more than 1024 end up being freed.
> > Maybe it takes time, I don't know.
> >
>
> At least alloc_q is only freed on KNI release, so mbufs in that fifo can
> sit there for as long as the application is running.
>
> > 2-) I tested tx_rs_thresh via ping. After 210 sec, the allocated mbufs go
> > back to the mempool (most of them). (The driver is virtio and the eth
> > devices are bound via igb_uio.) It really takes time, so it is better to
> > increase the size of the mempool; see the tx queue setup sketch after
> > this list.
> > (https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html)
> >
> > 3-) Tried to list the mempool state at random intervals.
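> >
> > Regarding 2-), a minimal sketch of lowering those thresholds at tx queue
> > setup time (port_id and nb_txd are assumed to be set up elsewhere, and the
> > values shown are illustrative; valid ranges depend on the PMD):
> >
> > #include <rte_ethdev.h>
> >
> > struct rte_eth_dev_info dev_info;
> > struct rte_eth_txconf txconf;
> >
> > rte_eth_dev_info_get(port_id, &dev_info);
> > txconf = dev_info.default_txconf;
> > txconf.tx_rs_thresh   = 32;  /* report completion status more often */
> > txconf.tx_free_thresh = 32;  /* free transmitted mbufs back sooner  */
> >
> > /* done before rte_eth_dev_start() */
> > rte_eth_tx_queue_setup(port_id, 0, nb_txd,
> >                        rte_eth_dev_socket_id(port_id), &txconf);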
> >
>
> It looks like the number of mbufs in use is increasing, but in the worst
> case both alloc_q and free_q can be full, which makes 2048 mbufs, and in
> the tests below the number of used mbufs is not bigger than this value, so
> it looks OK.
> If you run your test for a longer duration, do you observe the number of
> used mbufs going much above this value?
>
> Also, what is the 'num' parameter passed to the 'rte_kni_tx_burst()' API?
> If it is bigger than 'MAX_MBUF_BURST_NUM', that may lead to mbufs
> accumulating in the free_q fifo.
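>
> For illustration, a minimal sketch of that path, assuming 'kni' and
> 'port_id' are already initialized: the burst is kept at 32 to stay within
> MAX_MBUF_BURST_NUM (32 in the KNI library source), and anything the fifo
> does not accept is freed by the caller so it returns to the mempool:
>
> #define PKT_BURST 32 /* <= MAX_MBUF_BURST_NUM */
>
> struct rte_mbuf *pkts[PKT_BURST];
> uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, PKT_BURST);
> unsigned int nb_tx = rte_kni_tx_burst(kni, pkts, nb_rx);
>
> /* mbufs not enqueued towards the kernel must be freed here, otherwise
>  * they never go back to the pool. */
> for (unsigned int i = nb_tx; i < nb_rx; i++)
>         rte_pktmbuf_free(pkts[i]);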
>
>
> As an experiment, it is possible to decrease the KNI fifo sizes and observe
> the result.
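>
> For sizing, a rough sketch of the worst-case mempool budget (the variable
> names are illustrative; 1024 is the default KNI fifo length,
> KNI_FIFO_COUNT_MAX):
>
> unsigned int nb_mbufs = nb_rxd + nb_txd           /* PMD rx/tx rings       */
>                       + burst_size * nb_lcores    /* in-flight bursts      */
>                       + 2 * 1024;                 /* alloc_q + free_q full */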
>
>
> > Test 1 -) (old code) ICMP testing. The whole mempool size is about
> > 10,350. So after the FIFO reaches its maximum size of 1024, about 10% of
> > the mempool is in use. But little by little memory stays in use and
> > doesn't go back to the pool. I could not find the reason.
> >
> > MBUF_POOL                      448            9,951
> >  4.31% [|....................]
> > MBUF_POOL                      1,947          8,452
> > 18.72% [||||.................]
> > MBUF_POOL                      1,803          8,596
> > 17.34% [||||.................]
> > MBUF_POOL                      1,941          8,458
> > 18.67% [||||.................]
> > MBUF_POOL                      1,900          8,499
> > 18.27% [||||.................]
> > MBUF_POOL                      1,999          8,400
> > 19.22% [||||.................]
> > MBUF_POOL                      1,724          8,675
> > 16.58% [||||.................]
> > MBUF_POOL                      1,811          8,588
> > 17.42% [||||.................]
> > MBUF_POOL                      1,978          8,421
> > 19.02% [||||.................]
> > MBUF_POOL                      2,008          8,391
> > 19.31% [||||.................]
> > MBUF_POOL                      1,854          8,545
> > 17.83% [||||.................]
> > MBUF_POOL                      1,922          8,477
> > 18.48% [||||.................]
> > MBUF_POOL                      1,892          8,507
> > 18.19% [||||.................]
> > MBUF_POOL                      1,957          8,442
> > 18.82% [||||.................]
> >
> > Test 2 -) (old code) Ran iperf3 UDP testing from the kernel to the eth
> > device. Waited to see what happens over 4 min. Memory doesn't go back to
> > the mempool; little by little, memory usage increases.
> >
> > MBUF_POOL                      512            9,887
> >  4.92% [|....................]
> > MBUF_POOL                      1,411          8,988
> > 13.57% [|||..................]
> > MBUF_POOL                      1,390          9,009
> > 13.37% [|||..................]
> > MBUF_POOL                      1,558          8,841
> > 14.98% [|||..................]
> > MBUF_POOL                      1,453          8,946
> > 13.97% [|||..................]
> > MBUF_POOL                      1,525          8,874
> > 14.66% [|||..................]
> > MBUF_POOL                      1,592          8,807
> > 15.31% [||||.................]
> > MBUF_POOL                      1,639          8,760
> > 15.76% [||||.................]
> > MBUF_POOL                      1,624          8,775
> > 15.62% [||||.................]
> > MBUF_POOL                      1,618          8,781
> > 15.56% [||||.................]
> > MBUF_POOL                      1,708          8,691
> > 16.42% [||||.................]
> > iperf is STOPPED to tx_fresh for 4 min
> > MBUF_POOL                      1,709          8,690
> > 16.43% [||||.................]
> > iperf is STOPPED to tx_fresh for 4 min
> > MBUF_POOL                      1,709          8,690
> > 16.43% [||||.................]
> > MBUF_POOL                      1,683          8,716
> > 16.18% [||||.................]
> > MBUF_POOL                      1,563          8,836
> > 15.03% [||||.................]
> > MBUF_POOL                      1,726          8,673
> > 16.60% [||||.................]
> > MBUF_POOL                      1,589          8,810
> > 15.28% [||||.................]
> > MBUF_POOL                      1,556          8,843
> > 14.96% [|||..................]
> > MBUF_POOL                      1,610          8,789
> > 15.48% [||||.................]
> > MBUF_POOL                      1,616          8,783
> > 15.54% [||||.................]
> > MBUF_POOL                      1,709          8,690
> > 16.43% [||||.................]
> > MBUF_POOL                      1,740          8,659
> > 16.73% [||||.................]
> > MBUF_POOL                      1,546          8,853
> > 14.87% [|||..................]
> > MBUF_POOL                      1,710          8,689
> > 16.44% [||||.................]
> > MBUF_POOL                      1,787          8,612
> > 17.18% [||||.................]
> > MBUF_POOL                      1,579          8,820
> > 15.18% [||||.................]
> > MBUF_POOL                      1,780          8,619
> > 17.12% [||||.................]
> > MBUF_POOL                      1,679          8,720
> > 16.15% [||||.................]
> > MBUF_POOL                      1,604          8,795
> > 15.42% [||||.................]
> > MBUF_POOL                      1,761          8,638
> > 16.93% [||||.................]
> > MBUF_POOL                      1,773          8,626
> > 17.05% [||||.................]
> >
> > Test 3 -) (after patching) Ran iperf3 UDP testing from the kernel to the
> > eth device. Looks stable.
> > After patching:
> >
> > MBUF_POOL                      76             10,323
> > 0.73% [|....................]
> > MBUF_POOL                      193            10,206
> > 1.86% [|....................]
> > MBUF_POOL                      96             10,303
> > 0.92% [|....................]
> > MBUF_POOL                      269            10,130
> > 2.59% [|....................]
> > MBUF_POOL                      102            10,297
> > 0.98% [|....................]
> > MBUF_POOL                      235            10,164
> > 2.26% [|....................]
> > MBUF_POOL                      87             10,312
> > 0.84% [|....................]
> > MBUF_POOL                      293            10,106
> > 2.82% [|....................]
> > MBUF_POOL                      99             10,300
> > 0.95% [|....................]
> > MBUF_POOL                      296            10,103
> > 2.85% [|....................]
> > MBUF_POOL                      90             10,309
> > 0.87% [|....................]
> > MBUF_POOL                      299            10,100
> > 2.88% [|....................]
> > MBUF_POOL                      86             10,313
> > 0.83% [|....................]
> > MBUF_POOL                      262            10,137
> > 2.52% [|....................]
> > MBUF_POOL                      81             10,318
> > 0.78% [|....................]
> > MBUF_POOL                      81             10,318
> > 0.78% [|....................]
> > MBUF_POOL                      87             10,312
> > 0.84% [|....................]
> > MBUF_POOL                      252            10,147
> > 2.42% [|....................]
> > MBUF_POOL                      97             10,302
> > 0.93% [|....................]
> > iperf is STOPPED to tx_fresh for 4 min
> > MBUF_POOL                      296            10,103
> > 2.85% [|....................]
> > MBUF_POOL                      95             10,304
> > 0.91% [|....................]
> > MBUF_POOL                      269            10,130
> > 2.59% [|....................]
> > MBUF_POOL                      302            10,097
> > 2.90% [|....................]
> > MBUF_POOL                      88             10,311
> > 0.85% [|....................]
> > MBUF_POOL                      305            10,094
> > 2.93% [|....................]
> > MBUF_POOL                      88             10,311
> > 0.85% [|....................]
> > MBUF_POOL                      290            10,109
> > 2.79% [|....................]
> > MBUF_POOL                      84             10,315
> > 0.81% [|....................]
> > MBUF_POOL                      85             10,314
> > 0.82% [|....................]
> > MBUF_POOL                      291            10,108
> > 2.80% [|....................]
> > MBUF_POOL                      303            10,096
> > 2.91% [|....................]
> > MBUF_POOL                      92             10,307
> > 0.88% [|....................]
> >
> >
> > Best regards.
> >
> >
> > Ferruh Yigit <ferruh.yigit@amd.com>, on Thu, 18 May 2023 at 17:56, wrote:
> >
> >     On 5/18/2023 9:14 AM, Yasin CANER wrote:
> >     > Hello Ferruh,
> >     >
> >     > Thanks for your kind response. Also thanks to Stephen.
> >     >
> >     > Even if only 1 packet is consumed by the kernel, rx_kni allocates
> >     > another 32 units each time. After a while the whole mempool is used
> >     > up in the KNI alloc_q and there is no room left for anything else.
> >     >
> >
> >     What you described continues until 'alloc_q' is full; by default the
> >     fifo length is 1024 (KNI_FIFO_COUNT_MAX). Do you allocate fewer mbufs
> >     than that in your mempool?
> >
> >     You can consider either increasing the mempool size or decreasing the
> >     'alloc_q' fifo length, but reducing the fifo size may cause
> >     performance issues, so you need to evaluate that option.
> >
> >     > Do you think my mistake is using one common mempool for both KNI
> >     > and eth?
> >     >
> >
> >     Using the same mempool for both is fine.
> >
> >     > If it needs a separate mempool, I'd like to note that in the docs.
> >     >
> >     > Best regards.
> >     >
> >     > Ferruh Yigit <ferruh.yigit@amd.com>, on Wed, 17 May 2023 at 20:53,
> >     > wrote:
> >     >
> >     >     On 5/9/2023 12:13 PM, Yasin CANER wrote:
> >     >     > Hello,
> >     >     >
> >     >     > I drew a flow via asciiflow to explain myself better. The
> >     >     > problem is that after transmitting packets (mbufs), they are
> >     >     > never put into kni->free_q to go back to the original pool.
> >     >     > Each cycle, another 32 units are allocated, which causes
> >     >     > leaks. Or I am missing something.
> >     >     >
> >     >     > I already tried the rte_eth_tx_done_cleanup() function but it
> >     >     > didn't fix anything.
> >     >     >
> >     >     > I am working on a patch to fix this issue but I am not sure
> >     >     > if there is another way.
> >     >     >
> >     >     > Best regards.
> >     >     >
> >     >     > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
> >     >     >
> >     >     >
> >     >     > unsigned
> >     >     > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
> >     >     >                  unsigned int num)
> >     >     > {
> >     >     >         unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
> >     >     >
> >     >     >         /* If buffers removed, allocate mbufs and then put them
> >     >     >          * into alloc_q */
> >     >     >         /* Question: how to test whether buffers were removed or not? */
> >     >     >         if (ret)
> >     >     >                 kni_allocate_mbufs(kni);
> >     >     >
> >     >     >         return ret;
> >     >     > }
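> >     >     >
> >     >     > For context, a minimal sketch of the kernel->eth direction
> >     >     > around this API ('kni', 'port_id' and queue 0 are assumed to
> >     >     > be initialized elsewhere): mbufs pulled from the kernel must
> >     >     > either be transmitted by the PMD or freed back to the pool.
> >     >     >
> >     >     > struct rte_mbuf *pkts[32];
> >     >     > unsigned int nb_rx = rte_kni_rx_burst(kni, pkts, 32);
> >     >     > uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
> >     >     >
> >     >     > /* free whatever the PMD did not send */
> >     >     > for (unsigned int i = nb_tx; i < nb_rx; i++)
> >     >     >         rte_pktmbuf_free(pkts[i]);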
> >     >     >
> >     >
> >     >     Selam Yasin,
> >     >
> >     >
> >     >     You can expect the 'kni->alloc_q' fifo to be full; this is not
> >     >     a memory leak.
> >     >
> >     >     As you pointed out, the number of mbufs consumed by the kernel
> >     >     from 'alloc_q' and the number of mbufs added to 'alloc_q' are
> >     >     not equal, and this is expected.
> >     >
> >     >     The target here is to prevent buffer underflow from the
> >     >     kernel's perspective, so it always has mbufs available for new
> >     >     packets.
> >     >     That is why new mbufs are added to 'alloc_q' at the same, or
> >     >     sometimes a higher, rate than they are consumed.
> >     >
> >     >     You should calculate your mbuf requirement with the assumption
> >     >     that 'kni->alloc_q' will be full of mbufs.
> >     >
> >     >
> >     >     'kni->alloc_q' is freed when the KNI is removed.
> >     >     Since 'alloc_q' holds the physical addresses of the mbufs, it
> >     >     is a little challenging to free them in userspace; that is why
> >     >     the kernel first tries to move the mbufs to the 'kni->free_q'
> >     >     fifo, please check 'kni_net_release_fifo_phy()' for it.
> >     >
> >     >     If all are moved to the 'free_q' fifo, nothing is left in
> >     >     'alloc_q'; but if not, userspace frees 'alloc_q' in
> >     >     'rte_kni_release()', with the following call:
> >     >     `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
> >     >
> >     >
> >     >     I can see you have submitted fixes for this issue; although, as
> >     >     I explained above, I don't think a defect exists, I will review
> >     >     them today/tomorrow.
> >     >
> >     >     Regards,
> >     >     Ferruh
> >     >
> >     >
> >     >     > Stephen Hemminger <stephen@networkplumber.org>, on Mon, 8 May
> >     >     > 2023 at 19:18, wrote:
> >     >     >
> >     >     >     On Mon, 8 May 2023 09:01:41 +0300
> >     >     >     Yasin CANER <yasinncaner@gmail.com> wrote:
> >     >     >
> >     >     >     > Hello Stephen,
> >     >     >     >
> >     >     >     > Thank you for the response, it helps me a lot. I
> >     >     >     > understand the problem better.
> >     >     >     >
> >     >     >     > After reading the mbuf library documentation
> >     >     >     > (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html),
> >     >     >     > I realized that the 31 allocated mbuf slots do not
> >     >     >     > return to the pool!
> >     >     >
> >     >     >     If receive burst returns 1 mbuf, the other 31 pointers in
> >     >     >     the array are not valid. They do not point to mbufs.
> >     >     >
> >     >     >     > A single mbuf can be freed via rte_pktmbuf_free so it
> >     >     >     > can go back to the pool.
> >     >     >     >
> >     >     >     > The main problem is that the allocation doesn't return
> >     >     >     > to the original pool and acts as used. So, after
> >     >     >     > following the rte_pktmbuf_free
> >     >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902>
> >     >     >     > function, I realized that there are 2 functions that
> >     >     >     > help mbufs go back to the pool.
> >     >     >     >
> >     >     >     > These are rte_mbuf_raw_free
> >     >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432>
> >     >     >     > and rte_pktmbuf_free_seg
> >     >     >     > <http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37>.
> >     >     >     > I will focus on them.
> >     >     >     >
> >     >     >     > If there is another suggestion, I will be very pleased.
> >     >     >     >
> >     >     >     > Best regards.
> >     >     >     >
> >     >     >     > Yasin CANER
> >     >     >     > Ulak
> >     >     >
> >     >
> >
>
>

Thread overview: 13+ messages
2023-05-08  6:01 Yasin CANER
2023-05-08 16:18 ` Stephen Hemminger
2023-05-09 11:13   ` Yasin CANER
2023-05-11 14:14     ` Yasin CANER
2023-05-17 17:53     ` Ferruh Yigit
2023-05-18  8:14       ` Yasin CANER
2023-05-18 14:56         ` Ferruh Yigit
2023-05-19 17:47           ` Yasin CANER
2023-05-19 18:43             ` Ferruh Yigit
2023-05-29  6:33               ` Yasin CANER [this message]
  -- strict thread matches above, loose matches on Subject: below --
2023-05-04  7:32 Yasin CANER
2023-05-04 13:00 ` Yasin CANER
2023-05-04 16:14   ` Stephen Hemminger
