From: David Marchand <david.marchand@redhat.com>
To: Ke Zhang <ke1x.zhang@intel.com>
Cc: Xiaoyun Li <xiaoyun.li@intel.com>,
"Singh, Aman Deep" <aman.deep.singh@intel.com>,
Yuying Zhang <yuying.zhang@intel.com>, dev <dev@dpdk.org>,
"Yigit, Ferruh" <ferruh.yigit@intel.com>,
Thomas Monjalon <thomas@monjalon.net>
Subject: Re: [PATCH v2] app/testpmd: fix issue with memory leaks when quit testpmd
Date: Mon, 14 Mar 2022 10:10:25 +0100
Message-ID: <CAJFAV8xZ5B3AtJLReYDtP9Q=XtZ_8yNWVW8kE5WRgOHcpupa9Q@mail.gmail.com>
In-Reply-To: <20220314055252.392004-1-ke1x.zhang@intel.com>

On Mon, Mar 14, 2022 at 6:59 AM Ke Zhang <ke1x.zhang@intel.com> wrote:
>
> When DPDK is compiled with ASan, there is a memory leak after
> quitting testpmd if mcast_addr was set; this patch fixes the issue.
The memory leak is present regardless of whether ASan is compiled in.
Plus, afaiu, the issue also happens when closing a port.
I'd rather rephrase like:
"""
A multicast address pool is allocated for a port when using mcast_addr
testpmd commands.
When closing a port or stopping testpmd, this pool was not freed,
resulting in a leak.
This issue has been caught using ASan.
Free this pool when closing the port.
"""
>
> Error info as follows:
> ERROR: LeakSanitizer: detected memory leaks
> Direct leak of 192 byte(s)
> #0 0x7f6a2e0aeffe in __interceptor_realloc
> (/lib/x86_64-linux-gnu/libasan.so.5+0x10dffe)
> #1 0x565361eb340f in mcast_addr_pool_extend
> ../app/test-pmd/config.c:5162
> #2 0x565361eb3556 in mcast_addr_pool_append
> ../app/test-pmd/config.c:5180
> #3 0x565361eb3aae in mcast_addr_add
> ../app/test-pmd/config.c:5243
>
This patch needs a Fixes: tag.
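For reference, the format from the contribution guide, referencing the
commit that introduced the leak (the sha is left as a placeholder here,
to be filled in by the author):

  Fixes: <12-char sha> ("headline of the offending commit")

plus Cc: stable@dpdk.org if a backport is wanted.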
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> ---
> app/test-pmd/testpmd.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index fe2ce19f99..f7e18aee25 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -3136,6 +3136,12 @@ close_port(portid_t pid)
> continue;
> }
>
> + if (port->mc_addr_nb != 0) {
> + /* free the pool of multicast addresses. */
> + free(port->mc_addr_pool);
> + port->mc_addr_pool = NULL;
> + }
> +
Nit: I would introduce a helper in app/test-pmd/config.c, for example,
mcast_addr_pool_destroy.
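Something like the sketch below, to give the idea (the helper name and
its exact shape are only a suggestion, not existing testpmd code; it
reuses the global ports[] array and the port_id_is_invalid() check that
config.c already has):

	/* app/test-pmd/config.c -- hypothetical helper, name is a suggestion. */
	void
	mcast_addr_pool_destroy(portid_t port_id)
	{
		struct rte_port *port;

		if (port_id_is_invalid(port_id, ENABLED_WARN) ||
		    port_id == (portid_t)RTE_PORT_ALL)
			return;
		port = &ports[port_id];

		if (port->mc_addr_nb != 0) {
			/* free the pool of multicast addresses. */
			free(port->mc_addr_pool);
			port->mc_addr_pool = NULL;
			port->mc_addr_nb = 0;
		}
	}

close_port() could then simply call mcast_addr_pool_destroy(pi) instead
of open-coding the free.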
> if (is_proc_primary()) {
> port_flow_flush(pi);
> port_flex_item_flush(pi);
> --
> 2.25.1
>
--
David Marchand