DPDK patches and discussions
From: "Zhang, Yuying" <yuying.zhang@intel.com>
To: "Zhang, Ke1X" <ke1x.zhang@intel.com>,
	"Li, Xiaoyun" <xiaoyun.li@intel.com>,
	"Singh, Aman Deep" <aman.deep.singh@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [PATCH v2] app/testpmd: fix memory leak when quitting testpmd
Date: Tue, 15 Mar 2022 10:06:05 +0000	[thread overview]
Message-ID: <DM6PR11MB35168EEDF5503C37E148FBF68E109@DM6PR11MB3516.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20220314055252.392004-1-ke1x.zhang@intel.com>

Hi Ke,

> -----Original Message-----
> From: Zhang, Ke1X <ke1x.zhang@intel.com>
> Sent: Monday, March 14, 2022 1:53 PM
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Singh, Aman Deep
> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>;
> dev@dpdk.org
> Cc: Zhang, Ke1X <ke1x.zhang@intel.com>
> Subject: [PATCH v2] app/testpmd: fix memory leak when quitting
> testpmd
> 
> When DPDK is compiled with ASan, a memory leak is reported after quitting
> testpmd if mcast_addr has been set. This patch fixes the issue.
> 
> Error info as following:
> ERROR: LeakSanitizer: detected memory leaks
> Direct leak of 192 byte(s)
> 0 0x7f6a2e0aeffe in __interceptor_realloc
> 	(/lib/x86_64-linux-gnu/libasan.so.5+0x10dffe)
> 1 0x565361eb340f in mcast_addr_pool_extend
> 	../app/test-pmd/config.c:5162
> 2 0x565361eb3556 in mcast_addr_pool_append
> 	../app/test-pmd/config.c:5180
> 3 0x565361eb3aae in mcast_addr_add
> 	../app/test-pmd/config.c:5243
> 
> Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
> ---
>  app/test-pmd/testpmd.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index fe2ce19f99..f7e18aee25 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -3136,6 +3136,12 @@ close_port(portid_t pid)
>  			continue;
>  		}
> 
> +		if (port->mc_addr_nb != 0) {
> +			/* free the pool of multicast addresses. */
> +			free(port->mc_addr_pool);
> +			port->mc_addr_pool = NULL;
> +		}
> +

Is port->mc_addr_pool located in shared memory, and may it be freed in the primary process?
BTW, you can write a function in config.c, similar to mcast_addr_pool_extend(), and call it from close_port(); see the sketch below.
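
For example, a minimal sketch of such a helper (the name mcast_addr_pool_destroy() is only illustrative; it assumes the existing mc_addr_pool/mc_addr_nb fields of struct rte_port and the port_id helpers already used in config.c):

/* app/test-pmd/config.c -- illustrative sketch, not the final patch */
void
mcast_addr_pool_destroy(portid_t port_id)
{
	struct rte_port *port;

	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
	    port_id == (portid_t)RTE_PORT_ALL)
		return;
	port = &ports[port_id];

	if (port->mc_addr_nb != 0) {
		/* free the pool of multicast addresses. */
		free(port->mc_addr_pool);
		port->mc_addr_pool = NULL;
		port->mc_addr_nb = 0;
	}
}

close_port() could then call mcast_addr_pool_destroy(pi) next to the other per-port cleanup; if the pool may only be freed by the primary process, the call can be guarded with is_proc_primary().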

>  		if (is_proc_primary()) {
>  			port_flow_flush(pi);
>  			port_flex_item_flush(pi);
> --
> 2.25.1


Thread overview: 13+ messages
2022-03-01  2:06 [PATCH] " Ke Zhang
2022-03-04 16:43 ` Ferruh Yigit
2022-03-08  6:05   ` Zhang, Ke1X
2022-03-14  5:52 ` [PATCH v2] " Ke Zhang
2022-03-14  9:10   ` David Marchand
2022-03-15 10:06   ` Zhang, Yuying [this message]
2022-03-25  8:35   ` [PATCH v3] " Ke Zhang
2022-04-04 15:34     ` Zhang, Yuying
2022-06-08 12:06       ` Ferruh Yigit
2022-03-14  5:17 [PATCH v2] " Ke Zhang
2022-03-14  5:34 Ke Zhang
2022-03-14  5:47 Ke Zhang
2022-03-15 10:00 ` Zhang, Yuying
