From: Anatoly Burakov
To: dev@dpdk.org, Xueqin Lin, Zhihong Peng
Cc: david.marchand@redhat.com, vladimir.medvedkin@intel.com, stable@dpdk.org
Subject: [PATCH v1 1/1] malloc: fix ASan handling for unmapped memory
Date: Wed, 4 May 2022 13:40:53 +0000

Currently, when we free previously allocated memory, we mark the area
as "freed" for ASan purposes (flag 0xfd). However, freeing a malloc
element will sometimes cause its pages to be unmapped and re-backed
with anonymous memory. This can later trigger an ASan "use-after-free"
error, because the allocator will try to write into memory areas that
were recently marked as "freed".

To fix this, we need to mark the unmapped memory area as "available",
and fix up the surrounding malloc element headers/trailers so that
later malloc routines can safely write into new malloc elements'
headers or trailers.
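For illustration only (not part of this patch): the poison/unpoison
semantics the fix relies on can be reproduced with ASan's public
interface from <sanitizer/asan_interface.h>. DPDK's internal
asan_set_zone() writes the shadow map directly instead, but the effect
is the same. A minimal sketch, compiled with -fsanitize=address:

  #include <stdlib.h>
  #include <sanitizer/asan_interface.h>

  int main(void)
  {
  	char *buf = malloc(64);

  	/* like asan_set_freezone(): any later access to buf is
  	 * reported by ASan as an error */
  	ASAN_POISON_MEMORY_REGION(buf, 64);

  	/* like asan_set_zone(..., 0x00): mark the area available
  	 * again, e.g. after its pages were unmapped and re-backed
  	 * with anonymous memory */
  	ASAN_UNPOISON_MEMORY_REGION(buf, 64);
  	buf[0] = 0;	/* OK, no ASan report */

  	free(buf);
  	return 0;
  }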
Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
Cc: zhihongx.peng@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov
---
 lib/eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 6c572b6f2c..a3d26fcbea 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -861,6 +861,7 @@ malloc_heap_free(struct malloc_elem *elem)
 	struct rte_memseg_list *msl;
 	unsigned int i, n_segs, before_space, after_space;
 	int ret;
+	bool unmapped = false;
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
@@ -1027,6 +1028,9 @@ malloc_heap_free(struct malloc_elem *elem)
 		request_to_primary(&req);
 	}
 
+	/* we didn't exit early, meaning we have unmapped some pages */
+	unmapped = true;
+
 	RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n",
 		msl->socket_id, aligned_len >> 20ULL);
 
@@ -1034,6 +1038,37 @@ malloc_heap_free(struct malloc_elem *elem)
 free_unlock:
 	asan_set_freezone(asan_ptr, asan_data_len);
 
+	/* if we unmapped some memory, we need to do additional work for ASan */
+	if (unmapped) {
+		void *asan_end = RTE_PTR_ADD(asan_ptr, asan_data_len);
+		void *aligned_end = RTE_PTR_ADD(aligned_start, aligned_len);
+		void *aligned_trailer = RTE_PTR_SUB(aligned_start,
+				MALLOC_ELEM_TRAILER_LEN);
+
+		/*
+		 * There was a memory area that was unmapped. This memory area
+		 * will have to be marked as available for ASan, because we will
+		 * want to use it next time it gets mapped again. The OS memory
+		 * protection should trigger a fault on access to these areas
+		 * anyway, so we are not giving up any protection.
+		 */
+		asan_set_zone(aligned_start, aligned_len, 0x00);
+
+		/*
+		 * ...however, when we unmap pages, we create new free elements
+		 * which might have been marked as "freed" with an earlier
+		 * `asan_set_freezone` call. So, if there is an area past the
+		 * unmapped space that was marked as freezone for ASan, we need
+		 * to mark the malloc header as available.
+		 */
+		if (asan_end > aligned_end)
+			asan_set_zone(aligned_end, MALLOC_ELEM_HEADER_LEN, 0x00);
+
+		/* if there's space before unmapped memory, mark as available */
+		if (asan_ptr < aligned_start)
+			asan_set_zone(aligned_trailer, MALLOC_ELEM_TRAILER_LEN, 0x00);
+	}
+
 	rte_spinlock_unlock(&(heap->lock));
 	return ret;
 }
-- 
2.25.1