* [dpdk-dev] [PATCH 4/8] ipc: fix vfio memleak
@ 2019-04-17 14:41 Herakliusz Lipiec
2019-04-17 14:41 ` Herakliusz Lipiec
2019-04-23 8:29 ` Burakov, Anatoly
0 siblings, 2 replies; 4+ messages in thread
From: Herakliusz Lipiec @ 2019-04-17 14:41 UTC (permalink / raw)
To: anatoly.burakov; +Cc: dev, Herakliusz Lipiec, jianfeng.tan, stable
When sending multiple requests, rte_mp_request_sync
can succeed in sending a few of those requests, but then
fail on a later one and in the end return with rc=-1.
The upper layers - e.g. device hotplug - currently
handle this case as if no messages were sent and no
memory for response buffers was allocated, which is
not true. Fix this by always freeing the reply message buffers.
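For illustration, a minimal sketch of the corrected call pattern (the
request name and payload below are made up for this example; it is not
the literal eal_vfio.c code): the reply buffer is freed on both the
success path and the failure path.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#include <rte_eal.h>

static int
request_fd_sketch(void)
{
	struct rte_mp_msg req;
	struct rte_mp_reply reply;
	struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
	int fd = -1;

	/* zero-init so reply.msgs is NULL even if the request fails early */
	memset(&req, 0, sizeof(req));
	memset(&reply, 0, sizeof(reply));
	snprintf(req.name, sizeof(req.name), "example_fd_request");

	if (rte_mp_request_sync(&req, &reply, &ts) == 0 &&
			reply.nb_received == 1) {
		struct rte_mp_msg *rep = &reply.msgs[0];

		if (rep->num_fds == 1)
			fd = rep->fds[0];
	}
	/* always free the reply buffer: rte_mp_request_sync() may have
	 * allocated it even when it returned -1 */
	free(reply.msgs);

	return fd;
}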
Fixes: 83a73c5fef66 ("vfio: use generic multi-process channel")
Cc: jianfeng.tan@intel.com
Cc: stable@dpdk.org
Signed-off-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
---
lib/librte_eal/linux/eal/eal_vfio.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index 19e70bb66..d293df062 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -319,8 +319,8 @@ vfio_open_group_fd(int iommu_group_num)
RTE_LOG(ERR, EAL, " bad VFIO group fd\n");
vfio_group_fd = 0;
}
- free(mp_reply.msgs);
}
+ free(mp_reply.msgs);
if (vfio_group_fd < 0)
RTE_LOG(ERR, EAL, " cannot request group fd\n");
@@ -583,8 +583,8 @@ vfio_sync_default_container(void)
p = (struct vfio_mp_param *)mp_rep->param;
if (p->result == SOCKET_OK)
iommu_type_id = p->iommu_type_id;
- free(mp_reply.msgs);
}
+ free(mp_reply.msgs);
if (iommu_type_id < 0) {
RTE_LOG(ERR, EAL, "Could not get IOMMU type for default container\n");
return -1;
@@ -1050,8 +1050,8 @@ vfio_get_default_container_fd(void)
free(mp_reply.msgs);
return mp_rep->fds[0];
}
- free(mp_reply.msgs);
}
+ free(mp_reply.msgs);
RTE_LOG(ERR, EAL, " cannot request default container fd\n");
return -1;
@@ -1182,8 +1182,8 @@ rte_vfio_get_container_fd(void)
free(mp_reply.msgs);
return vfio_container_fd;
}
- free(mp_reply.msgs);
}
+ free(mp_reply.msgs);
RTE_LOG(ERR, EAL, " cannot request container fd\n");
return -1;
--
2.17.2
* Re: [dpdk-dev] [PATCH 4/8] ipc: fix vfio memleak
2019-04-17 14:41 [dpdk-dev] [PATCH 4/8] ipc: fix vfio memleak Herakliusz Lipiec
2019-04-17 14:41 ` Herakliusz Lipiec
@ 2019-04-23 8:29 ` Burakov, Anatoly
2019-04-23 8:29 ` Burakov, Anatoly
1 sibling, 1 reply; 4+ messages in thread
From: Burakov, Anatoly @ 2019-04-23 8:29 UTC (permalink / raw)
To: Herakliusz Lipiec; +Cc: dev, jianfeng.tan, stable
On 17-Apr-19 3:41 PM, Herakliusz Lipiec wrote:
> When sending multiple requests, rte_mp_request_sync
> can succeed in sending a few of those requests, but then
> fail on a later one and in the end return with rc=-1.
> The upper layers - e.g. device hotplug - currently
> handle this case as if no messages were sent and no
> memory for response buffers was allocated, which is
> not true. Fix this by always freeing the reply message buffers.
This commit message is duplicated in multiple commits, and does not
really describe the problem you're fixing. Since you've already updated
the IPC API docs to ask the caller to free even in case of failure, I think
here (and in other similar patches) it is sufficient to just say
something like:
When sending synchronous IPC requests, the caller must free the response
buffer even if the request returned failure. Fix the code to correctly
use the IPC API.
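To make that contract concrete, here is a minimal sketch (the request
name and handling are made up for this example; it is not code from
eal_vfio.c): even when rte_mp_request_sync() returns -1, peers that
already answered leave allocated messages in reply.msgs, which the
caller owns and must free.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#include <rte_eal.h>

static int
broadcast_sketch(void)
{
	struct rte_mp_msg req;
	struct rte_mp_reply reply;
	struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
	int i, ret;

	memset(&req, 0, sizeof(req));
	memset(&reply, 0, sizeof(reply));
	snprintf(req.name, sizeof(req.name), "example_broadcast");

	ret = rte_mp_request_sync(&req, &reply, &ts);

	/* process whatever replies did arrive, even on partial failure */
	for (i = 0; i < reply.nb_received; i++)
		printf("reply %d carries %d bytes of param data\n",
				i, reply.msgs[i].len_param);

	/* caller owns reply.msgs regardless of ret; free(NULL) is a no-op */
	free(reply.msgs);

	return ret;
}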
>
> Fixes: 83a73c5fef66 ("vfio: use generic multi-process channel")
> Cc: jianfeng.tan@intel.com
> Cc: stable@dpdk.org
> Signed-off-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
> ---
For actual code,
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
--
Thanks,
Anatoly