From: David Marchand
Date: Thu, 21 Apr 2022 11:37:43 +0200
Subject: Re: [PATCH 2/3] mem: fix ASan shadow for remapped memory segments
To: "Burakov, Anatoly"
Cc: dev, "Mcnamara, John", Dmitry Kozlyuk, dpdk stable, Xueqin Lin

On Wed, Apr 20, 2022 at 4:47 PM Burakov, Anatoly wrote:
>
> On 15-Apr-22 6:31 PM, David Marchand wrote:
> > When releasing some memory, the allocator can choose to return some
> > pages to the OS. At the same time, this memory was poisoned in ASan
> > shadow. Doing the latter made it impossible to remap this same page
> > later.
> > On the other hand, without this poison, the OS would pagefault in any
> > case for this page.
> >
> > Remove the poisoning for unmapped pages.
> >
> > Bugzilla ID: 994
> > Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: David Marchand
> > ---
> >   lib/eal/common/malloc_elem.h |  4 ++++
> >   lib/eal/common/malloc_heap.c | 12 +++++++++++-
> >   2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/eal/common/malloc_elem.h b/lib/eal/common/malloc_elem.h
> > index 228f178418..b859003722 100644
> > --- a/lib/eal/common/malloc_elem.h
> > +++ b/lib/eal/common/malloc_elem.h
> > @@ -272,6 +272,10 @@ old_malloc_size(struct malloc_elem *elem)
> >
> >   #else /* !RTE_MALLOC_ASAN */
> >
> > +static inline void
> > +asan_set_zone(void *ptr __rte_unused, size_t len __rte_unused,
> > +	uint32_t val __rte_unused) { }
> > +
> >   static inline void
> >   asan_set_freezone(void *ptr __rte_unused, size_t size __rte_unused) { }
> >
> > diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
> > index 6c572b6f2c..5913d9f862 100644
> > --- a/lib/eal/common/malloc_heap.c
> > +++ b/lib/eal/common/malloc_heap.c
> > @@ -860,6 +860,7 @@ malloc_heap_free(struct malloc_elem *elem)
> >   	size_t len, aligned_len, page_sz;
> >   	struct rte_memseg_list *msl;
> >   	unsigned int i, n_segs, before_space, after_space;
> > +	bool unmapped_pages = false;
> >   	int ret;
> >   	const struct internal_config *internal_conf =
> >   		eal_get_internal_configuration();
> > @@ -999,6 +1000,13 @@ malloc_heap_free(struct malloc_elem *elem)
> >
> >   		/* don't care if any of this fails */
> >   		malloc_heap_free_pages(aligned_start, aligned_len);
> > +		/*
> > +		 * Clear any poisoning in ASan for the associated pages so that
> > +		 * next time EAL maps those pages, the allocator can access
> > +		 * them.
> > +		 */
> > +		asan_set_zone(aligned_start, aligned_len, 0x00);
> > +		unmapped_pages = true;
> >
> >   		request_sync();
> >   	} else {
> > @@ -1032,7 +1040,9 @@ malloc_heap_free(struct malloc_elem *elem)
> >
> >   	rte_mcfg_mem_write_unlock();
> >   free_unlock:
> > -	asan_set_freezone(asan_ptr, asan_data_len);
> > +	/* Poison memory range if belonging to some still mapped pages. */
> > +	if (!unmapped_pages)
> > +		asan_set_freezone(asan_ptr, asan_data_len);
> >
> >   	rte_spinlock_unlock(&(heap->lock));
> >   	return ret;
>
> I suspect the patch should be a little more complicated than that. When
> we unmap pages, we don't necessarily unmap the entire malloc element; it
> could be that we have a freed allocation like so:
>
> | malloc header | free space | unmapped space | free space | next malloc header |
>
> So, I think the freezone should be set from asan_ptr till aligned_start,
> and then from (aligned_start + aligned_len) till (asan_ptr +
> asan_data_len). Does that make sense?

(btw, I get a bounce for Zhihong's mail address; is he not working at
Intel anymore?)

To be honest, I don't understand if we can get to this situation :-)
(especially the free space after the unmapped region).
But I guess you mean something like (on top of current patch):

@@ -1040,9 +1040,25 @@ malloc_heap_free(struct malloc_elem *elem)

 	rte_mcfg_mem_write_unlock();
 free_unlock:
-	/* Poison memory range if belonging to some still mapped pages. */
-	if (!unmapped_pages)
+	if (!unmapped_pages) {
 		asan_set_freezone(asan_ptr, asan_data_len);
+	} else {
+		/*
+		 * We may be in a situation where we unmapped pages like this:
+		 * malloc header | free space | unmapped space | free space | malloc header
+		 */
+		void *free1_start = asan_ptr;
+		void *free1_end = aligned_start;
+		void *free2_start = RTE_PTR_ADD(aligned_start, aligned_len);
+		void *free2_end = RTE_PTR_ADD(asan_ptr, asan_data_len);
+
+		if (free1_start < free1_end)
+			asan_set_freezone(free1_start,
+				RTE_PTR_DIFF(free1_end, free1_start));
+		if (free2_start < free2_end)
+			asan_set_freezone(free2_start,
+				RTE_PTR_DIFF(free2_end, free2_start));
+	}

 	rte_spinlock_unlock(&(heap->lock));
 	return ret;


--
David Marchand
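
As a standalone illustration of the mechanism under discussion, here is a
minimal sketch (not DPDK code; it assumes a toolchain building with
-fsanitize=address and uses the public API from
<sanitizer/asan_interface.h>, whereas DPDK's asan_set_zone() writes shadow
bytes directly). It shows that manual poison lives in ASan's shadow,
independently of the page mapping, and persists until explicitly cleared;
that is why the patch clears it before pages are handed back to the OS:

/*
 * Minimal sketch, not DPDK code; names here are only for illustration.
 * Build with: cc -fsanitize=address demo.c
 */
#include <sanitizer/asan_interface.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	const size_t page_sz = 4096;

	/* Stand-in for a page the allocator is about to return to the OS. */
	char *page = mmap(NULL, page_sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED)
		return 1;

	/* Roughly what asan_set_freezone() amounts to: poison the range. */
	__asan_poison_memory_region(page, page_sz);
	printf("poisoned after free: %d\n", __asan_address_is_poisoned(page));

	/*
	 * The point of the patch: clear the poison before unmapping, so a
	 * later mapping that reuses this address range starts with a clean
	 * shadow instead of inheriting stale poison.
	 */
	__asan_unpoison_memory_region(page, page_sz);
	printf("poisoned after clear: %d\n", __asan_address_is_poisoned(page));

	munmap(page, page_sz);
	return 0;
}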
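
And a worked example of the two-range computation for the layout Anatoly
describes, with made-up addresses and plain pointer arithmetic standing in
for RTE_PTR_ADD()/RTE_PTR_DIFF():

/*
 * Hypothetical numbers for:
 * | header | free space | unmapped space | free space | next header |
 * The freed element covers [0x1000, 0x5000); the page-aligned middle
 * [0x2000, 0x4000) was unmapped, so only the still-mapped head and
 * tail may be poisoned.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uintptr_t asan_ptr = 0x1000, asan_data_len = 0x4000;
	uintptr_t aligned_start = 0x2000, aligned_len = 0x2000;

	uintptr_t free1_start = asan_ptr;                     /* 0x1000 */
	uintptr_t free1_end = aligned_start;                  /* 0x2000 */
	uintptr_t free2_start = aligned_start + aligned_len;  /* 0x4000 */
	uintptr_t free2_end = asan_ptr + asan_data_len;       /* 0x5000 */

	if (free1_start < free1_end)
		printf("poison head [%#lx, %#lx)\n",
			(unsigned long)free1_start, (unsigned long)free1_end);
	if (free2_start < free2_end)
		printf("poison tail [%#lx, %#lx)\n",
			(unsigned long)free2_start, (unsigned long)free2_end);
	return 0;
}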