* [dpdk-dev] [PATCH 5/8] ipc: fix pdump memleak
From: Herakliusz Lipiec @ 2019-04-17 14:41 UTC (permalink / raw)
To: reshma.pattan; +Cc: dev, Herakliusz Lipiec, jianfeng.tan, stable
When sending multiple requests, rte_mp_request_sync
can succeed in sending some of them, then fail on a
later one and ultimately return rc=-1. The upper
layers - e.g. device hotplug - currently handle this
case as if no messages were sent and no memory for
response buffers was allocated, which is not true.
Fix by always freeing the reply message buffers.
Fixes: 660098d61f57 ("pdump: use generic multi-process channel")
Cc: jianfeng.tan@intel.com
Cc: stable@dpdk.org
Signed-off-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
---
lib/librte_pdump/rte_pdump.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/librte_pdump/rte_pdump.c b/lib/librte_pdump/rte_pdump.c
index 14744b9ff..3787c3e32 100644
--- a/lib/librte_pdump/rte_pdump.c
+++ b/lib/librte_pdump/rte_pdump.c
@@ -525,8 +525,8 @@ pdump_prepare_client_request(char *device, uint16_t queue,
rte_errno = resp->err_value;
if (!resp->err_value)
ret = 0;
- free(mp_reply.msgs);
}
+ free(mp_reply.msgs);
if (ret < 0)
RTE_LOG(ERR, PDUMP,
--
2.17.2
* Re: [dpdk-dev] [PATCH 5/8] ipc: fix pdump memleak
From: Pattan, Reshma @ 2019-04-18 10:11 UTC (permalink / raw)
To: Lipiec, Herakliusz; +Cc: dev, jianfeng.tan, stable
> -----Original Message-----
> From: Lipiec, Herakliusz
> Sent: Wednesday, April 17, 2019 3:42 PM
> To: Pattan, Reshma <reshma.pattan@intel.com>
> Cc: dev@dpdk.org; Lipiec, Herakliusz <herakliusz.lipiec@intel.com>;
> jianfeng.tan@intel.com; stable@dpdk.org
> Subject: [PATCH 5/8] ipc: fix pdump memleak
>
> When sending multiple requests, rte_mp_request_sync can succeed in
> sending some of them, then fail on a later one and ultimately return
> rc=-1.
> The upper layers - e.g. device hotplug - currently handle this case as
> if no messages were sent and no memory for response buffers was
> allocated, which is not true. Fix by always freeing the reply message
> buffers.
>
> Fixes: 660098d61f57 ("pdump: use generic multi-process channel")
> Cc: jianfeng.tan@intel.com
> Cc: stable@dpdk.org
> Signed-off-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
Might need to add Bugzilla id in commit message. Other than that,
Acked-By: Reshma Pattan <reshma.pattan@intel.com>