From: Chengwen Feng
To: dev@dpdk.org
Subject: [PATCH v12 3/6] memarea: support alloc and free API
Date: Sat, 14 Jan 2023 19:49:41 +0800
Message-ID: <20230114114944.42194-4-fengchengwen@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20230114114944.42194-1-fengchengwen@huawei.com>
References: <20220721044648.6817-1-fengchengwen@huawei.com>
 <20230114114944.42194-1-fengchengwen@huawei.com>

This patch adds the rte_memarea_alloc() and rte_memarea_free() APIs.

Signed-off-by: Chengwen Feng
Reviewed-by: Dongdong Liu
---
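[Editor's note, not part of the commit] A minimal usage sketch of the two new
APIs. RTE_MEMAREA_SOURCE_LIBC, total_sz and mt_safe all appear in this series,
but the exact layout of struct rte_memarea_param (including the name field
written with snprintf below) is defined earlier in the series, not in this
patch, so treat those details as assumptions rather than as part of this diff.

  #include <stdio.h>
  #include <rte_memarea.h>

  static int
  memarea_alloc_free_example(void)
  {
          struct rte_memarea_param init = { 0 };
          struct rte_memarea *ma;
          void *obj;

          snprintf(init.name, sizeof(init.name), "example");
          init.source = RTE_MEMAREA_SOURCE_LIBC; /* backing memory from the C library */
          init.total_sz = 1 << 20;               /* 1 MiB backing area */
          init.mt_safe = 1;                      /* alloc/free take the internal spinlock */

          ma = rte_memarea_create(&init);
          if (ma == NULL)
                  return -1;

          /* 128-byte object whose address is a multiple of 64. */
          obj = rte_memarea_alloc(ma, 128, 64);
          if (obj != NULL)
                  rte_memarea_free(ma, obj);

          rte_memarea_destroy(ma);
          return 0;
  }

With align == 0 the returned pointer is only guaranteed to be suitably aligned
for a uint32_t, so pass an explicit power-of-two alignment whenever a stricter
boundary is required.
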
 doc/guides/prog_guide/memarea_lib.rst |   6 ++
 lib/memarea/memarea_private.h         |   3 +
 lib/memarea/rte_memarea.c             | 146 ++++++++++++++++++++++++++
 lib/memarea/rte_memarea.h             |  46 ++++++++
 lib/memarea/version.map               |   2 +
 5 files changed, 203 insertions(+)

diff --git a/doc/guides/prog_guide/memarea_lib.rst b/doc/guides/prog_guide/memarea_lib.rst
index 156ff35cfd..01187f7ccb 100644
--- a/doc/guides/prog_guide/memarea_lib.rst
+++ b/doc/guides/prog_guide/memarea_lib.rst
@@ -31,6 +31,12 @@ returns the pointer to the created memarea or ``NULL`` if the creation failed.
 
 The ``rte_memarea_destroy()`` function is used to destroy a memarea.
 
+The ``rte_memarea_alloc()`` function is used to allocate one memory object
+from the memarea.
+
+The ``rte_memarea_free()`` function is used to free one memory object which
+was allocated by ``rte_memarea_alloc()``.
+
 Reference
 ---------
 
diff --git a/lib/memarea/memarea_private.h b/lib/memarea/memarea_private.h
index 3d152ec780..509cbc7bc7 100644
--- a/lib/memarea/memarea_private.h
+++ b/lib/memarea/memarea_private.h
@@ -48,6 +48,9 @@ struct rte_memarea {
         void *area_addr;
         struct memarea_obj_list obj_list;
         struct memarea_obj_list free_list;
+
+        uint64_t alloc_fails;
+        uint64_t magic_check_fails;
 } __rte_cache_aligned;
 
 #endif /* MEMAREA_PRIVATE_H */
diff --git a/lib/memarea/rte_memarea.c b/lib/memarea/rte_memarea.c
index f3b3bdb09a..76f8083d96 100644
--- a/lib/memarea/rte_memarea.c
+++ b/lib/memarea/rte_memarea.c
@@ -2,8 +2,10 @@
  * Copyright(c) 2023 HiSilicon Limited
  */
 
+#include
 #include
 #include
 
+#include
 #include
 #include
@@ -84,6 +86,8 @@ memarea_alloc_area(const struct rte_memarea_param *init)
                                         init->numa_socket);
         else if (init->source == RTE_MEMAREA_SOURCE_LIBC)
                 ptr = memarea_alloc_from_libc(init->total_sz);
+        else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA)
+                ptr = rte_memarea_alloc(init->src_memarea, init->total_sz, 0);
 
         return ptr;
 }
@@ -95,6 +99,8 @@ memarea_free_area(const struct rte_memarea_param *init, void *ptr)
                 rte_free(ptr);
         else if (init->source == RTE_MEMAREA_SOURCE_LIBC)
                 free(ptr);
+        else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA)
+                rte_memarea_free(init->src_memarea, ptr);
 }
 
 struct rte_memarea *
@@ -156,3 +162,143 @@ rte_memarea_destroy(struct rte_memarea *ma)
         rte_free(ma);
 }
 
+static inline void
+memarea_lock(struct rte_memarea *ma)
+{
+        if (ma->init.mt_safe)
+                rte_spinlock_lock(&ma->lock);
+}
+
+static inline void
+memarea_unlock(struct rte_memarea *ma)
+{
+        if (ma->init.mt_safe)
+                rte_spinlock_unlock(&ma->lock);
+}
+
+static inline uint32_t
+memarea_calc_align_space(struct memarea_obj *obj, uint32_t align)
+{
+        if (align == 0)
+                return 0;
+        return align - (((uintptr_t)obj + sizeof(struct memarea_obj) + sizeof(uint32_t)) &
+                        (align - 1));
+}
+
+static inline bool
+memarea_whether_add_node(size_t obj_size, size_t need_size)
+{
+        return (obj_size - need_size) > sizeof(struct memarea_obj) + RTE_CACHE_LINE_SIZE;
+}
+
+static inline void
+memarea_add_node(struct rte_memarea *ma, struct memarea_obj *obj, size_t used_size)
+{
+        size_t align_size = RTE_ALIGN_CEIL(used_size, sizeof(void *));
+        struct memarea_obj *new_obj;
+
+        new_obj = (struct memarea_obj *)RTE_PTR_ADD(obj, sizeof(struct memarea_obj) +
+                                                    align_size);
+        new_obj->size = obj->size - align_size - sizeof(struct memarea_obj);
+        new_obj->alloc_size = 0;
+        new_obj->magic = MEMAREA_OBJECT_FREE_MAGIC;
+        TAILQ_INSERT_AFTER(&ma->obj_list, obj, new_obj, obj_node);
+        TAILQ_INSERT_AFTER(&ma->free_list, obj, new_obj, free_node);
+        obj->size = align_size;
+}
+
+void *
+rte_memarea_alloc(struct rte_memarea *ma, size_t size, uint32_t align)
+{
+        size_t size_req = size + align + sizeof(uint32_t); /* used to check size overflow */
+        struct memarea_obj *obj;
+        uint32_t align_space;
+        void *ptr = NULL;
+
+        if (unlikely(ma == NULL || size == 0 || size_req < size ||
+                     (align && !rte_is_power_of_2(align))))
+                return ptr;
+
+        memarea_lock(ma);
+        TAILQ_FOREACH(obj, &ma->free_list, free_node) {
+                if (unlikely(obj->magic != MEMAREA_OBJECT_FREE_MAGIC)) {
+                        ma->magic_check_fails++;
+                        RTE_LOG(ERR, MEMAREA, "memarea: %s magic: 0x%x check fail when alloc object!\n",
+                                ma->init.name, obj->magic);
+                        break;
+                }
+                align_space = memarea_calc_align_space(obj, align);
+                if (obj->size < size + align_space)
+                        continue;
+                if (memarea_whether_add_node(obj->size, size + align_space))
+                        memarea_add_node(ma, obj, size + align_space);
+                obj->alloc_size = size;
+                obj->magic = MEMAREA_OBJECT_ALLOCATED_MAGIC;
+                TAILQ_REMOVE(&ma->free_list, obj, free_node);
+                ptr = RTE_PTR_ADD(obj, sizeof(struct memarea_obj) + align_space + sizeof(uint32_t));
+                *(uint32_t *)RTE_PTR_SUB(ptr, sizeof(uint32_t)) = (uintptr_t)ptr - (uintptr_t)obj;
+                break;
+        }
+        if (unlikely(ptr == NULL))
+                ma->alloc_fails++;
+        memarea_unlock(ma);
+
+        return ptr;
+}
+
+static inline void
+memarea_merge_node(struct rte_memarea *ma, struct memarea_obj *curr,
+                   struct memarea_obj *next, bool del_next_from_free,
+                   bool add_curr_to_free)
+{
+        curr->size += next->size + sizeof(struct memarea_obj);
+        next->alloc_size = 0;
+        next->magic = 0;
+        TAILQ_REMOVE(&ma->obj_list, next, obj_node);
+        if (del_next_from_free)
+                TAILQ_REMOVE(&ma->free_list, next, free_node);
+        if (add_curr_to_free) {
+                curr->alloc_size = 0;
+                curr->magic = MEMAREA_OBJECT_FREE_MAGIC;
+                TAILQ_INSERT_TAIL(&ma->free_list, curr, free_node);
+        }
+}
+
+void
+rte_memarea_free(struct rte_memarea *ma, void *ptr)
+{
+        struct memarea_obj *obj, *prev, *next;
+        bool merged = false;
+        uint32_t offset;
+
+        if (unlikely(ma == NULL || ptr == NULL))
+                return;
+
+        offset = *(uint32_t *)RTE_PTR_SUB(ptr, sizeof(uint32_t));
+        obj = (struct memarea_obj *)RTE_PTR_SUB(ptr, offset);
+        if (unlikely(obj->magic != MEMAREA_OBJECT_ALLOCATED_MAGIC)) {
+                ma->magic_check_fails++;
+                RTE_LOG(ERR, MEMAREA, "memarea: %s magic: 0x%x check fail when free object!\n",
+                        ma->init.name, obj->magic);
+                return;
+        }
+
+        memarea_lock(ma);
+        prev = TAILQ_PREV(obj, memarea_obj_list, obj_node);
+        next = TAILQ_NEXT(obj, obj_node);
+        if (prev != NULL && prev->magic == MEMAREA_OBJECT_FREE_MAGIC) {
+                memarea_merge_node(ma, prev, obj, false, false);
+                obj = prev;
+                merged = true;
+        }
+        if (next != NULL && next->magic == MEMAREA_OBJECT_FREE_MAGIC) {
+                memarea_merge_node(ma, obj, next, true, !merged);
+                merged = true;
+        }
+        if (!merged) {
+                obj->alloc_size = 0;
+                obj->magic = MEMAREA_OBJECT_FREE_MAGIC;
+                TAILQ_INSERT_TAIL(&ma->free_list, obj, free_node);
+        }
+        memarea_unlock(ma);
+}
diff --git a/lib/memarea/rte_memarea.h b/lib/memarea/rte_memarea.h
index 9983308ae8..1e94685719 100644
--- a/lib/memarea/rte_memarea.h
+++ b/lib/memarea/rte_memarea.h
@@ -115,6 +115,52 @@ struct rte_memarea *rte_memarea_create(const struct rte_memarea_param *init);
 __rte_experimental
 void rte_memarea_destroy(struct rte_memarea *ma);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Allocate memory from memarea.
+ *
+ * Allocate one memory object from the memarea.
+ *
+ * @param ma
+ *   The pointer to the memarea.
+ * @param size
+ *   The memory size to be allocated.
+ * @param align
+ *   If 0, the returned pointer is suitably aligned for a uint32_t variable.
+ *   Otherwise, the returned address is a multiple of *align*. In this case,
+ *   *align* must be a power of two.
+ *
+ * @return
+ *  - NULL on error. Not enough memory, or invalid arguments (ma is NULL,
+ *    size is 0, align is non-zero and not a power of two).
+ *  - Otherwise, the pointer to the allocated object.
+ */
+__rte_experimental
+void *rte_memarea_alloc(struct rte_memarea *ma, size_t size, uint32_t align);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Free memory to memarea.
+ *
+ * Free one memory object back to the memarea.
+ * @note The memory object must have been returned by a previous call to
+ *   rte_memarea_alloc(); an object allocated from memarea-A must be freed
+ *   back to the same memarea-A. The behaviour of rte_memarea_free() is
+ *   undefined if the memarea or pointer does not match these requirements.
+ *
+ * @param ma
+ *   The pointer to the memarea. If ma is NULL, the function does nothing.
+ * @param ptr
+ *   The pointer to the memory object to be freed. If the pointer is NULL,
+ *   the function does nothing.
+ */
+__rte_experimental
+void rte_memarea_free(struct rte_memarea *ma, void *ptr);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/memarea/version.map b/lib/memarea/version.map
index f36a04d7cf..effbd0b488 100644
--- a/lib/memarea/version.map
+++ b/lib/memarea/version.map
@@ -1,8 +1,10 @@
 EXPERIMENTAL {
         global:
 
+        rte_memarea_alloc;
         rte_memarea_create;
         rte_memarea_destroy;
+        rte_memarea_free;
 
         local: *;
 };
-- 
2.33.0
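
[Editor's note, not part of the patch] The new alloc/free pair also enables the
RTE_MEMAREA_SOURCE_MEMAREA path added above in memarea_alloc_area() and
memarea_free_area(): a memarea can now carve its backing memory out of another
memarea. A minimal sketch, assuming the rte_memarea_param field names (name,
source, src_memarea, total_sz, mt_safe) from earlier patches in this series:

  /* Create a child memarea whose backing region is allocated from 'parent'
   * via rte_memarea_alloc(parent, total_sz, 0) inside memarea_alloc_area().
   */
  static struct rte_memarea *
  memarea_create_from_parent(struct rte_memarea *parent)
  {
          struct rte_memarea_param init = { 0 };

          snprintf(init.name, sizeof(init.name), "child");
          init.source = RTE_MEMAREA_SOURCE_MEMAREA;
          init.src_memarea = parent;   /* backing memory comes from 'parent' */
          init.total_sz = 64 * 1024;   /* size requested from the parent */
          init.mt_safe = 1;
          return rte_memarea_create(&init);
  }

Destroying the child returns its backing region to the parent through
rte_memarea_free(), so the parent memarea must outlive every memarea created
from it.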