DPDK patches and discussions
* [dpdk-dev] [PATCH 3/8] ipc: fix vdev memleak
@ 2019-04-17 14:38 Herakliusz Lipiec
From: Herakliusz Lipiec @ 2019-04-17 14:38 UTC (permalink / raw)
  To: dev; +Cc: Herakliusz Lipiec, jianfeng.tan, stable

When sending multiple requests, rte_mp_request_sync
can succeed in sending a few of those requests, but then
fail on a later one and ultimately return with rc=-1.
The upper layers - e.g. device hotplug - currently
handle this case as if no messages were sent and no
memory for response buffers was allocated, which is
not true. Fix by always freeing the reply message buffers.

Fixes: cdb068f031c6 ("bus/vdev: scan by multi-process channel")
Cc: jianfeng.tan@intel.com
Cc: stable@dpdk.org
Signed-off-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
---
 drivers/bus/vdev/vdev.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 04f76a63f..7c43f2ddd 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -429,10 +429,9 @@ vdev_scan(void)
 			mp_rep = &mp_reply.msgs[0];
 			resp = (struct vdev_param *)mp_rep->param;
 			VDEV_LOG(INFO, "Received %d vdevs", resp->num);
-			free(mp_reply.msgs);
 		} else
 			VDEV_LOG(ERR, "Failed to request vdev from primary");
-
+		free(mp_reply.msgs);
 		/* Fall through to allow private vdevs in secondary process */
 	}
 
-- 
2.17.2
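
For reference, the caller-side pattern the fix relies on can be shown with
a minimal sketch (not part of the patch; the request name "example_request"
and the function example_mp_request() are hypothetical): rte_mp_request_sync()
may allocate reply.msgs for replies it has already collected even when it
returns -1 after a partial send, so the buffer must be freed on both paths.

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <time.h>

#include <rte_eal.h>

/* Hypothetical caller: issue one synchronous IPC request and always
 * release the reply buffer, mirroring the change made in vdev_scan().
 */
int
example_mp_request(void)
{
	struct rte_mp_msg req;
	struct rte_mp_reply reply;
	struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };

	memset(&req, 0, sizeof(req));
	memset(&reply, 0, sizeof(reply));
	snprintf(req.name, sizeof(req.name), "example_request");

	if (rte_mp_request_sync(&req, &reply, &ts) == 0) {
		/* reply.msgs[0 .. reply.nb_received - 1] are valid and
		 * were allocated by the IPC layer.
		 */
	} else {
		/* The request failed part-way; replies received before
		 * the failure may still have been allocated.
		 */
	}

	/* Free unconditionally; free(NULL) is a no-op. */
	free(reply.msgs);
	return 0;
}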
