patches for DPDK stable branches
* [PATCH v1 1/1] malloc: fix ASan handling for unmapped memory
@ 2022-05-04 13:40 Anatoly Burakov
  2022-05-04 14:31 ` [PATCH v2 " Anatoly Burakov
  0 siblings, 1 reply; 3+ messages in thread
From: Anatoly Burakov @ 2022-05-04 13:40 UTC (permalink / raw)
  To: dev, Xueqin Lin, Zhihong Peng; +Cc: david.marchand, vladimir.medvedkin, stable

Currently, when we free previously allocated memory, we mark the area as
"freed" for ASan purposes (flag 0xfd). However, freeing a malloc element
will sometimes cause its pages to be unmapped and re-backed with
anonymous memory. This can later trigger ASan "use-after-free" reports,
because the allocator will try to write into memory areas that were
recently marked as "freed".

To fix this, mark the unmapped memory area as "available", and fix up the
surrounding malloc elements' headers and trailers so that later malloc
routines can safely write into the headers or trailers of new malloc
elements.
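
For illustration, "freed" and "available" correspond to poisoned and
unpoisoned ASan shadow memory. Below is a minimal sketch using ASan's
public interface (the allocator's helpers in malloc_elem.h write the
shadow bytes directly instead of calling this API):

#include <stddef.h>
#include <sanitizer/asan_interface.h>

/* mark an area as "freed": any later access is reported by ASan */
static void
mark_freed(void *ptr, size_t len)
{
	__asan_poison_memory_region(ptr, len);
}

/* mark an area as "available": accesses are allowed again, which is what
 * the unmapped range and the adjacent header/trailer bytes need so that
 * the allocator can write new element metadata there without a report
 */
static void
mark_available(void *ptr, size_t len)
{
	__asan_unpoison_memory_region(ptr, len);
}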

Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
Cc: zhihongx.peng@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 6c572b6f2c..a3d26fcbea 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -861,6 +861,7 @@ malloc_heap_free(struct malloc_elem *elem)
 	struct rte_memseg_list *msl;
 	unsigned int i, n_segs, before_space, after_space;
 	int ret;
+	bool unmapped = false;
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
@@ -1027,6 +1028,9 @@ malloc_heap_free(struct malloc_elem *elem)
 		request_to_primary(&req);
 	}
 
+	/* we didn't exit early, meaning we have unmapped some pages */
+	unmapped = true;
+
 	RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n",
 		msl->socket_id, aligned_len >> 20ULL);
 
@@ -1034,6 +1038,37 @@ malloc_heap_free(struct malloc_elem *elem)
 free_unlock:
 	asan_set_freezone(asan_ptr, asan_data_len);
 
+	/* if we unmapped some memory, we need to do additional work for ASan */
+	if (unmapped) {
+		void *asan_end = RTE_PTR_ADD(asan_ptr, asan_data_len);
+		void *aligned_end = RTE_PTR_ADD(aligned_start, aligned_len);
+		void *aligned_trailer = RTE_PTR_SUB(aligned_start,
+				MALLOC_ELEM_TRAILER_LEN);
+
+		/*
+		 * There was a memory area that was unmapped. This memory area
+		 * will have to be marked as available for ASan, because we will
+		 * want to use it next time it gets mapped again. The OS memory
+		 * protection should trigger a fault on access to these areas
+		 * anyway, so we are not giving up any protection.
+		 */
+		asan_set_zone(aligned_start, aligned_len, 0x00);
+
+		/*
+		 * ...however, when we unmap pages, we create new free elements
+		 * which might have been marked as "freed" with an earlier
+		 * `asan_set_freezone` call. So, if there is an area past the
+		 * unmapped space that was marked as freezone for ASan, we need
+		 * to mark the malloc header as available.
+		 */
+		if (asan_end > aligned_end)
+			asan_set_zone(aligned_end, MALLOC_ELEM_HEADER_LEN, 0x00);
+
+		/* if there's space before unmapped memory, mark as available */
+		if (asan_ptr < aligned_start)
+			asan_set_zone(aligned_trailer, MALLOC_ELEM_TRAILER_LEN, 0x00);
+	}
+
 	rte_spinlock_unlock(&(heap->lock));
 	return ret;
 }
-- 
2.25.1



* [PATCH v2 1/1] malloc: fix ASan handling for unmapped memory
  2022-05-04 13:40 [PATCH v1 1/1] malloc: fix ASan handling for unmapped memory Anatoly Burakov
@ 2022-05-04 14:31 ` Anatoly Burakov
  2022-05-05  9:10   ` David Marchand
  0 siblings, 1 reply; 3+ messages in thread
From: Anatoly Burakov @ 2022-05-04 14:31 UTC (permalink / raw)
  To: dev, Xueqin Lin, Zhihong Peng; +Cc: vladimir.medvedkin, david.marchand, stable

Currently, when we free previously allocated memory, we mark the area as
"freed" for ASan purposes (flag 0xfd). However, freeing a malloc element
will sometimes cause its pages to be unmapped and re-backed with
anonymous memory. This can later trigger ASan "use-after-free" reports,
because the allocator will try to write into memory areas that were
recently marked as "freed".

To fix this, mark the unmapped memory area as "available", and fix up the
surrounding malloc elements' headers and trailers so that later malloc
routines can safely write into the headers or trailers of new malloc
elements.
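
The failure mode can be sketched as follows. This is a hypothetical
reproducer, not part of the patch: it assumes dynamic memory mode and a
build with ASan enabled (e.g. -Db_sanitize=address), and whether the
freed pages are actually unmapped depends on allocation size and
hugepage configuration.

#include <rte_eal.h>
#include <rte_malloc.h>

int
main(int argc, char **argv)
{
	void *buf;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* large allocation, likely backed by freshly mapped pages */
	buf = rte_malloc(NULL, 64 * 1024 * 1024, 0);
	if (buf == NULL)
		return -1;

	/* freeing may shrink the heap and unmap the pages; before this fix,
	 * their ASan shadow remained marked as "freed" (0xfd)
	 */
	rte_free(buf);

	/* a later allocation over the re-mapped area writes new element
	 * headers/trailers, which ASan could then report as use-after-free
	 */
	buf = rte_malloc(NULL, 64 * 1024 * 1024, 0);
	rte_free(buf);

	return rte_eal_cleanup();
}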

Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
Cc: zhihongx.peng@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v2:
    - Add missing dummy implementation for `asan_set_zone()`
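
    The stub is needed because malloc_heap.c now calls asan_set_zone()
    unconditionally, so the call must also compile when DPDK is built
    without ASan instrumentation. A rough sketch of the pattern (the
    RTE_MALLOC_ASAN guard name is assumed from malloc_elem.h, and the
    instrumented body is elided):

    #ifdef RTE_MALLOC_ASAN
    static inline void
    asan_set_zone(void *ptr, size_t len, uint32_t val)
    {
    	/* write 'val' into the ASan shadow bytes covering [ptr, ptr + len) */
    }
    #else
    static inline void
    asan_set_zone(void *ptr __rte_unused, size_t len __rte_unused,
    		uint32_t val __rte_unused) { }
    #endif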

 lib/eal/common/malloc_elem.h |  4 ++++
 lib/eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/lib/eal/common/malloc_elem.h b/lib/eal/common/malloc_elem.h
index f2aa98821b..c5f65895e1 100644
--- a/lib/eal/common/malloc_elem.h
+++ b/lib/eal/common/malloc_elem.h
@@ -278,6 +278,10 @@ old_malloc_size(struct malloc_elem *elem)
 
 #define __rte_no_asan
 
+static inline void
+asan_set_zone(void *ptr __rte_unused, size_t len __rte_unused,
+		uint32_t val __rte_unused) { }
+
 static inline void
 asan_set_freezone(void *ptr __rte_unused, size_t size __rte_unused) { }
 
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 6c572b6f2c..a3d26fcbea 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -861,6 +861,7 @@ malloc_heap_free(struct malloc_elem *elem)
 	struct rte_memseg_list *msl;
 	unsigned int i, n_segs, before_space, after_space;
 	int ret;
+	bool unmapped = false;
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
@@ -1027,6 +1028,9 @@ malloc_heap_free(struct malloc_elem *elem)
 		request_to_primary(&req);
 	}
 
+	/* we didn't exit early, meaning we have unmapped some pages */
+	unmapped = true;
+
 	RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n",
 		msl->socket_id, aligned_len >> 20ULL);
 
@@ -1034,6 +1038,37 @@ malloc_heap_free(struct malloc_elem *elem)
 free_unlock:
 	asan_set_freezone(asan_ptr, asan_data_len);
 
+	/* if we unmapped some memory, we need to do additional work for ASan */
+	if (unmapped) {
+		void *asan_end = RTE_PTR_ADD(asan_ptr, asan_data_len);
+		void *aligned_end = RTE_PTR_ADD(aligned_start, aligned_len);
+		void *aligned_trailer = RTE_PTR_SUB(aligned_start,
+				MALLOC_ELEM_TRAILER_LEN);
+
+		/*
+		 * There was a memory area that was unmapped. This memory area
+		 * will have to be marked as available for ASan, because we will
+		 * want to use it next time it gets mapped again. The OS memory
+		 * protection should trigger a fault on access to these areas
+		 * anyway, so we are not giving up any protection.
+		 */
+		asan_set_zone(aligned_start, aligned_len, 0x00);
+
+		/*
+		 * ...however, when we unmap pages, we create new free elements
+		 * which might have been marked as "freed" with an earlier
+		 * `asan_set_freezone` call. So, if there is an area past the
+		 * unmapped space that was marked as freezone for ASan, we need
+		 * to mark the malloc header as available.
+		 */
+		if (asan_end > aligned_end)
+			asan_set_zone(aligned_end, MALLOC_ELEM_HEADER_LEN, 0x00);
+
+		/* if there's space before unmapped memory, mark as available */
+		if (asan_ptr < aligned_start)
+			asan_set_zone(aligned_trailer, MALLOC_ELEM_TRAILER_LEN, 0x00);
+	}
+
 	rte_spinlock_unlock(&(heap->lock));
 	return ret;
 }
-- 
2.25.1



* Re: [PATCH v2 1/1] malloc: fix ASan handling for unmapped memory
  2022-05-04 14:31 ` [PATCH v2 " Anatoly Burakov
@ 2022-05-05  9:10   ` David Marchand
  0 siblings, 0 replies; 3+ messages in thread
From: David Marchand @ 2022-05-05  9:10 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev, Xueqin Lin, Vladimir Medvedkin, dpdk stable

On Wed, May 4, 2022 at 4:32 PM Anatoly Burakov
<anatoly.burakov@intel.com> wrote:
>
> Currently, when we free previously allocated memory, we mark the area as
> "freed" for ASan purposes (flag 0xfd). However, freeing a malloc element
> will sometimes cause its pages to be unmapped and re-backed with
> anonymous memory. This can later trigger ASan "use-after-free" reports,
> because the allocator will try to write into memory areas that were
> recently marked as "freed".
>
> To fix this, mark the unmapped memory area as "available", and fix up the
> surrounding malloc elements' headers and trailers so that later malloc
> routines can safely write into the headers or trailers of new malloc
> elements.

Bugzilla ID: 994
> Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
> Cc: stable@dpdk.org
>

Reported-by: David Marchand <david.marchand@redhat.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>

It fixes the issues I saw with unit tests.
Applied, thanks for working on this problem.


I'll respin my series that enables ASan in GHA.

-- 
David Marchand


