From: David Marchand
Date: Thu, 21 Apr 2022 11:50:47 +0200
Subject: Re: [PATCH 2/3] mem: fix ASan shadow for remapped memory segments
To: "Burakov, Anatoly"
Cc: dev, "Mcnamara, John", Dmitry Kozlyuk, dpdk stable, Xueqin Lin
References: <20220415173127.3838-1-david.marchand@redhat.com> <20220415173127.3838-3-david.marchand@redhat.com> <419bb7fc-cb04-10cf-a40a-5dba39323f9e@intel.com>
List-Id: patches for DPDK stable branches
On Thu, Apr 21, 2022 at 11:37 AM David Marchand wrote:
>
> On Wed, Apr 20, 2022 at 4:47 PM Burakov, Anatoly wrote:
> >
> > On 15-Apr-22 6:31 PM, David Marchand wrote:
> > > When releasing some memory, the allocator can choose to return some
> > > pages to the OS. At the same time, this memory was poisoned in ASan
> > > shadow. Doing the latter made it impossible to remap this same page
> > > later.
> > > On the other hand, without this poison, the OS would pagefault in any
> > > case for this page.
> > >
> > > Remove the poisoning for unmapped pages.
> > >
> > > Bugzilla ID: 994
> > > Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: David Marchand
> > > ---
> > >  lib/eal/common/malloc_elem.h |  4 ++++
> > >  lib/eal/common/malloc_heap.c | 12 +++++++++++-
> > >  2 files changed, 15 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/eal/common/malloc_elem.h b/lib/eal/common/malloc_elem.h
> > > index 228f178418..b859003722 100644
> > > --- a/lib/eal/common/malloc_elem.h
> > > +++ b/lib/eal/common/malloc_elem.h
> > > @@ -272,6 +272,10 @@ old_malloc_size(struct malloc_elem *elem)
> > >
> > >  #else /* !RTE_MALLOC_ASAN */
> > >
> > > +static inline void
> > > +asan_set_zone(void *ptr __rte_unused, size_t len __rte_unused,
> > > +		uint32_t val __rte_unused) { }
> > > +
> > >  static inline void
> > >  asan_set_freezone(void *ptr __rte_unused, size_t size __rte_unused) { }
> > >
> > > diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
> > > index 6c572b6f2c..5913d9f862 100644
> > > --- a/lib/eal/common/malloc_heap.c
> > > +++ b/lib/eal/common/malloc_heap.c
> > > @@ -860,6 +860,7 @@ malloc_heap_free(struct malloc_elem *elem)
> > >  	size_t len, aligned_len, page_sz;
> > >  	struct rte_memseg_list *msl;
> > >  	unsigned int i, n_segs, before_space, after_space;
> > > +	bool unmapped_pages = false;
> > >  	int ret;
> > >  	const struct internal_config *internal_conf =
> > >  		eal_get_internal_configuration();
> > > @@ -999,6 +1000,13 @@ malloc_heap_free(struct malloc_elem *elem)
> > >
> > >  		/* don't care if any of this fails */
> > >  		malloc_heap_free_pages(aligned_start, aligned_len);
> > > +		/*
> > > +		 * Clear any poisoning in ASan for the associated pages so that
> > > +		 * next time EAL maps those pages, the allocator can access
> > > +		 * them.
> > > +		 */
> > > +		asan_set_zone(aligned_start, aligned_len, 0x00);
> > > +		unmapped_pages = true;
> > >
> > >  		request_sync();
> > >  	} else {
> > > @@ -1032,7 +1040,9 @@ malloc_heap_free(struct malloc_elem *elem)
> > >
> > >  	rte_mcfg_mem_write_unlock();
> > >  free_unlock:
> > > -	asan_set_freezone(asan_ptr, asan_data_len);
> > > +	/* Poison memory range if belonging to some still mapped pages. */
> > > +	if (!unmapped_pages)
> > > +		asan_set_freezone(asan_ptr, asan_data_len);
> > >
> > >  	rte_spinlock_unlock(&(heap->lock));
> > >  	return ret;
> >
> > I suspect the patch should be a little more complicated than that. When
> > we unmap pages, we don't necessarily unmap the entire malloc element;
> > it could be that we have a freed allocation like so:
> >
> > | malloc header | free space | unmapped space | free space | next malloc header |
> >
> > So, I think the freezone should be set from asan_ptr till aligned_start,
> > and then from (aligned_start + aligned_len) till (asan_ptr + asan_data_len).
> > Does that make sense?
>
> (btw, I get a bounce for Zhihong mail address, is he not working at
> Intel anymore?)
>
> To be honest, I don't understand if we can get to this situation :-)
> (especially the free space after the unmapped region).
> But I guess you mean something like (on top of current patch):
>
> @@ -1040,9 +1040,25 @@ malloc_heap_free(struct malloc_elem *elem)
>
>  	rte_mcfg_mem_write_unlock();
>  free_unlock:
> -	/* Poison memory range if belonging to some still mapped pages. */
> -	if (!unmapped_pages)
> +	if (!unmapped_pages) {
>  		asan_set_freezone(asan_ptr, asan_data_len);
> +	} else {
> +		/*
> +		 * We may be in a situation where we unmapped pages like this:
> +		 * malloc header | free space | unmapped space | free space | malloc header
> +		 */
> +		void *free1_start = asan_ptr;
> +		void *free1_end = aligned_start;
> +		void *free2_start = RTE_PTR_ADD(aligned_start, aligned_len);
> +		void *free2_end = RTE_PTR_ADD(asan_ptr, asan_data_len);
> +
> +		if (free1_start < free1_end)
> +			asan_set_freezone(free1_start,
> +				RTE_PTR_DIFF(free1_end, free1_start));
> +		if (free2_start < free2_end)
> +			asan_set_freezone(free2_start,
> +				RTE_PTR_DIFF(free2_end, free2_start));
> +	}
>
>  	rte_spinlock_unlock(&(heap->lock));
>  	return ret;

But I get a splat for func_reentrancy_autotest:

=================================================================
==4098809==ERROR: AddressSanitizer: heap-use-after-free on address 0x7fa00fa00030 at pc 0x0000035779fe bp 0x7ffe01ed0cf0 sp 0x7ffe01ed0ce8
READ of size 1 at 0x7fa00fa00030 thread T0
    #0 0x35779fd in malloc_elem_join_adjacent_free ../lib/eal/common/malloc_elem.c:539
    #1 0x3577bc5 in malloc_elem_free ../lib/eal/common/malloc_elem.c:586
    #2 0x357bd25 in malloc_heap_free ../lib/eal/common/malloc_heap.c:886
    #3 0x357e032 in mem_free ../lib/eal/common/rte_malloc.c:37
    #4 0x357e032 in rte_free ../lib/eal/common/rte_malloc.c:44
    #5 0x336a547 in rte_lpm_free ../lib/lpm/rte_lpm.c:281
    #6 0x2cb3488 in lpm_create_free ../app/test/test_func_reentrancy.c:395
    #7 0x2cb47ed in launch_test ../app/test/test_func_reentrancy.c:455
    #8 0x2cb47ed in test_func_reentrancy ../app/test/test_func_reentrancy.c:502
    #9 0x2b0dd75 in cmd_autotest_parsed ../app/test/commands.c:68
    #10 0x34e5dce in cmdline_parse ../lib/cmdline/cmdline_parse.c:287
    #11 0x34e31af in cmdline_valid_buffer ../lib/cmdline/cmdline.c:24
    #12 0x34eba9f in rdline_char_in ../lib/cmdline/cmdline_rdline.c:444
    #13 0x34e329b in cmdline_in ../lib/cmdline/cmdline.c:146
    #14 0x34e329b in cmdline_in ../lib/cmdline/cmdline.c:135
    #15 0x40d8a5 in main ../app/test/test.c:217
    #16 0x7fa41009b55f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
    #17 0x7fa41009b60b in __libc_start_main_impl ../csu/libc-start.c:409
    #18 0x2b0dc64 in _start (/home/dmarchan/dpdk/build-gcc-asan/app/test/dpdk-test+0x2b0dc64)

Address 0x7fa00fa00030 is a wild pointer.
SUMMARY: AddressSanitizer: heap-use-after-free ../lib/eal/common/malloc_elem.c:539 in malloc_elem_join_adjacent_free
Shadow bytes around the buggy address:
  0x0ff481f37fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff481f37fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff481f37fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff481f37fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0ff481f37ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0ff481f38000: fd fd fd fd fd fd[fd]fd fd fd fd fd fd fd fd fd
  0x0ff481f38010: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0ff481f38020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0ff481f38030: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0ff481f38040: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0ff481f38050: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):

-- 
David Marchand
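
[Editor's sketch, not part of the original thread: the discussion above assumes familiarity with manual ASan poisoning, which asan_set_zone()/asan_set_freezone() rely on. The minimal standalone program below shows the mechanism using the public compiler-rt macros from <sanitizer/asan_interface.h>; it is only an illustration of the idea, not DPDK's implementation, and the buffer, sizes, and values are made up. Build with -fsanitize=address.]

/*
 * Illustrative sketch: mark part of a heap buffer as off-limits in the
 * ASan shadow, then clear that poison before the memory is reused.
 */
#include <stdlib.h>
#include <string.h>
#include <sanitizer/asan_interface.h>

int main(void)
{
	char *buf = malloc(64);

	if (buf == NULL)
		return 1;
	memset(buf, 0, 64);

	/* Poison the second half: the shadow now marks it unaddressable. */
	ASAN_POISON_MEMORY_REGION(buf + 32, 32);

	buf[0] = 1; /* fine: this byte is still addressable */

	/*
	 * Reading or writing buf[40] here would abort with a
	 * "use-after-poison" report. The splat above is the related
	 * heap-use-after-free case: the fd shadow bytes mark heap that
	 * ASan considers freed, so touching it is reported.
	 */

	/* Clear the poison before handing the range back to regular code. */
	ASAN_UNPOISON_MEMORY_REGION(buf + 32, 32);
	free(buf);
	return 0;
}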