DPDK patches and discussions
From: "Peng, ZhihongX" <zhihongx.peng@intel.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "Wang, Haiyue" <haiyue.wang@intel.com>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Lin, Xueqin" <xueqin.lin@intel.com>,
	"Yu, PingX" <pingx.yu@intel.com>
Subject: Re: [dpdk-dev] [RFC] mem_debug add more log
Date: Mon, 21 Dec 2020 07:35:10 +0000	[thread overview]
Message-ID: <fb3460d7ae74490e8bb098e610c369ac@intel.com> (raw)
In-Reply-To: <20201218105424.6731d866@hermes.local>

1. I think this implementation doesn't add significant overhead; the extra work happens only in rte_malloc and rte_free.
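
For reference, a rough sketch of the bookkeeping this RFC describes: each allocation is recorded with its caller's file/function/line, and a validate pass walks the list and reports blocks whose guard byte was clobbered. The names, the single guard byte, and the plain printf() below are illustrative only, not the actual patch code:

    /*
     * Illustrative sketch only -- not the RFC patch itself.  The real code
     * hooks rte_malloc()/rte_free() and reuses the RTE_MALLOC_DEBUG trailer
     * cookie; here a single extra guard byte stands in for it.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define GUARD_BYTE 0x69

    struct malloc_info {
        void *addr;               /* start of the user data area */
        size_t size;              /* requested size */
        const char *file;         /* __FILE__ of the caller */
        const char *func;         /* __func__ of the caller */
        int line;                 /* __LINE__ of the caller */
        struct malloc_info *next;
    };

    static struct malloc_info *alloc_list;

    /* Allocation path: the backing buffer must hold size + 1 bytes so the
     * guard byte fits right after the user data. */
    static void
    record_alloc(void *addr, size_t size,
                 const char *file, const char *func, int line)
    {
        struct malloc_info *mi = malloc(sizeof(*mi));

        if (mi == NULL)
            return;
        mi->addr = addr;
        mi->size = size;
        mi->file = file;
        mi->func = func;
        mi->line = line;
        mi->next = alloc_list;
        alloc_list = mi;
        ((unsigned char *)addr)[size] = GUARD_BYTE;
    }

    /* Walk every tracked allocation and report blocks whose guard byte
     * has been overwritten. */
    static void
    validate_all_memory(void)
    {
        const struct malloc_info *mi;

        for (mi = alloc_list; mi != NULL; mi = mi->next) {
            if (((const unsigned char *)mi->addr)[mi->size] != GUARD_BYTE)
                printf("malloc memory address %p overflow in "
                       "file:%s function:%s line:%d\n",
                       mi->addr, mi->file, mi->func, mi->line);
        }
    }

The rte_free() side, which unlinks and frees the matching entry before checking its guard byte, is omitted here for brevity.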

2. The existing address sanitizer infrastructure only supports libc malloc.

Regards,
Peng,Zhihong

-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org> 
Sent: Saturday, December 19, 2020 2:54 AM
To: Peng, ZhihongX <zhihongx.peng@intel.com>
Cc: Wang, Haiyue <haiyue.wang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; dev@dpdk.org
Subject: Re: [dpdk-dev] [RFC] mem_debug add more log

On Fri, 18 Dec 2020 14:21:09 -0500
Peng Zhihong <zhihongx.peng@intel.com> wrote:

> 1. The debugging log in the current DPDK RTE_MALLOC_DEBUG mode is insufficient,
>    which makes it difficult to locate issues, such as:
>    a) When a memory overflow occurs in rte_free, there is little log
>       information. Even if we abort there, we can see which API core
>       dumped, but we still need to read the source code to find out where
>       the requested memory was overflowed.
>    b) Current DPDK can NOT detect the overflow if the memory is still in
>       use and has not been released.
>    c) If there are two contiguous blocks of memory, and the first block
>       is not released while an overflow occurs that also corrupts the
>       second block, the overflow will only be detected once the second
>       block is released. However, current DPDK cannot find the correct
>       point of the overflow: it reports the overflow in the second block
>       when it should detect the one in the first block.
>       ----------------------------------------------------------------------------------
>       | header cookie | data1 | trailer cookie | header cookie | data2 |trailer cookie |
>       ----------------------------------------------------------------------------------
> 
> 2. To fix the above issues, we can store the request information when DPDK
>    allocates memory, including the requested address and the file, function,
>    and line number of the requesting code, and then put it on a list.
>    --------------------     ----------------------     ----------------------
>    | struct list_head |---->| struct malloc_info |---->| struct malloc_info |
>    --------------------     ----------------------     ----------------------
>    The above 3 problems can be solved through this implementation:
>    a) If a memory overflow is detected in rte_free, we can traverse the
>       list to find the information about the overflowed memory and print
>       it, like this:
>       code:
>       37         char *p = rte_zmalloc(NULL, 64, 0);
>       38         memset(p, 0, 65);
>       39         rte_free(p);
>       40         //rte_malloc_validate_all_memory();
>       memory error:
>       EAL: Error: Invalid memory
>       malloc memory address 0x17ff2c340 overflow in \
>       file:../examples/helloworld/main.c function:main line:37
>    b)c) Provide an interface, rte_malloc_validate_all_memory, to check
>       all memory for overflow; it checks every allocation on the list.
>       By calling it manually at the exit point of the business logic, we
>       can find all overflow points in time.
> 
> Signed-off-by: Peng Zhihong <zhihongx.peng@intel.com>

Good concept, but doesn't this add significant overhead?

Maybe we could make rte_malloc work with the existing address sanitizer infrastructure in gcc/clang?  That would be faster and would give more immediate, better diagnostic info.
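
For illustration (not a worked-out proposal): ASan already exposes a manual poisoning interface in <sanitizer/asan_interface.h> that custom allocators can call. A minimal sketch, where the demo_* wrappers and the redzone layout are made up and only the __asan_*_memory_region() calls are the real compiler API:

    /* Build with -fsanitize=address.  Alignment handling is simplified;
     * a real integration would respect ASan's 8-byte shadow granularity. */
    #include <stdlib.h>
    #include <sanitizer/asan_interface.h>

    #define REDZONE 16   /* illustrative redzone on each side of the data */

    /* Allocate size bytes with poisoned redzones before and after them,
     * so out-of-bounds accesses trip ASan immediately. */
    static void *
    demo_malloc(size_t size)
    {
        char *raw = malloc(size + 2 * REDZONE);  /* stand-in for the real heap */

        if (raw == NULL)
            return NULL;
        __asan_poison_memory_region(raw, REDZONE);                  /* front */
        __asan_poison_memory_region(raw + REDZONE + size, REDZONE); /* rear  */
        return raw + REDZONE;
    }

    /* Unpoison the whole region before returning it to the backing heap. */
    static void
    demo_free(void *ptr, size_t size)
    {
        char *raw = (char *)ptr - REDZONE;

        __asan_unpoison_memory_region(raw, size + 2 * REDZONE);
        free(raw);
    }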


Thread overview: 6+ messages
2020-12-18 19:21 Peng Zhihong
2020-12-18 18:54 ` Stephen Hemminger
2020-12-21  7:35   ` Peng, ZhihongX [this message]
2020-12-21 18:44     ` Stephen Hemminger
2020-12-25  7:20       ` Peng, ZhihongX
2023-06-12  2:17         ` Stephen Hemminger
