* [dpdk-dev] [Bug 228] rte_mp_request_sync memleak with multiple recipients
From: bugzilla @ 2019-03-20 10:50 UTC
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=228
Bug ID: 228
Summary: rte_mp_request_sync memleak with multiple recipients
Product: DPDK
Version: unspecified
Hardware: All
OS: All
Status: CONFIRMED
Severity: minor
Priority: Normal
Component: core
Assignee: dev@dpdk.org
Reporter: dariusz.stojaczyk@intel.com
Target Milestone: ---
When a request has multiple recipients, rte_mp_request_sync
can succeed in sending the message to some of them, but
then fail on a later one and ultimately return with rc=-1.
The upper layers (e.g. device hotplug) currently handle
this case as if no messages had been sent and no memory
for response buffers had been allocated, which is not
true.
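
For illustration, here is a minimal sketch of the caller
pattern described above. The function and message names are
hypothetical; rte_mp_request_sync() and the rte_mp_msg /
rte_mp_reply structures are the existing DPDK IPC API:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#include <rte_eal.h>    /* rte_mp_request_sync(), rte_mp_msg, rte_mp_reply */

/* Hypothetical caller, following the pattern used by e.g. device hotplug. */
static int
send_example_request(void)
{
	struct rte_mp_msg req;
	struct rte_mp_reply reply;  /* note: reply.msgs left uninitialized */
	struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };

	memset(&req, 0, sizeof(req));
	snprintf(req.name, sizeof(req.name), "%s", "example_request");

	if (rte_mp_request_sync(&req, &reply, &ts) < 0) {
		/*
		 * The request may already have been delivered to some
		 * recipients and their responses allocated into
		 * reply.msgs, but with reply uninitialized the caller
		 * cannot tell, so that memory is leaked on this path.
		 */
		return -1;
	}

	/* Success path: the caller owns and must free reply.msgs. */
	free(reply.msgs);
	return 0;
}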
Fixing this is not straightforward, as rte_mp_request_sync
can fail before the response buffer pointer is even
set (it is uninitialized by default), so the caller
cannot safely access it (or free it) merely because
the function returned with rc=-1. One way to fix this
is to always initialize the response buffer pointer and
always require the caller to free the response buffer.
This, however, calls for a redesign and should be
addressed with a bit more effort.
Maybe the response buffer pointer should be initialized
by the caller from the very beginning, as sketched below?
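
A rough sketch of that direction, assuming the library is
changed so that reply.msgs is guaranteed to be either NULL
or a valid allocation on every return path (the function
and message names are again hypothetical):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#include <rte_eal.h>

/*
 * Hypothetical caller written against the contract suggested above:
 * reply.msgs is always either NULL or a valid allocation, so it can
 * be freed unconditionally, regardless of the return code.
 */
static int
send_example_request_fixed(void)
{
	struct rte_mp_msg req;
	struct rte_mp_reply reply = {0};   /* reply.msgs starts out NULL */
	struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
	int ret;

	memset(&req, 0, sizeof(req));
	snprintf(req.name, sizeof(req.name), "%s", "example_request");

	ret = rte_mp_request_sync(&req, &reply, &ts);

	/* Always release whatever was collected, even on failure. */
	free(reply.msgs);

	return ret;
}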
--
You are receiving this mail because:
You are the assignee for the bug.