From: Chengwen Feng
Subject: [PATCH v18 3/6] memarea: support alloc and free API
Date: Tue, 18 Jul 2023 13:46:07 +0000
Message-ID: <20230718134610.32836-4-fengchengwen@huawei.com>
In-Reply-To: <20230718134610.32836-1-fengchengwen@huawei.com>
References: <20220721044648.6817-1-fengchengwen@huawei.com>
 <20230718134610.32836-1-fengchengwen@huawei.com>
List-Id: DPDK patches and discussions

This patch supports the rte_memarea_alloc() and rte_memarea_free() APIs.
Signed-off-by: Chengwen Feng
Reviewed-by: Dongdong Liu
Acked-by: Morten Brørup
Acked-by: Anatoly Burakov
---
 doc/guides/prog_guide/memarea_lib.rst |   6 +
 lib/memarea/memarea_private.h         |  10 ++
 lib/memarea/rte_memarea.c             | 159 ++++++++++++++++++++++++++
 lib/memarea/rte_memarea.h             |  46 ++++++++
 lib/memarea/version.map               |   2 +
 5 files changed, 223 insertions(+)

diff --git a/doc/guides/prog_guide/memarea_lib.rst b/doc/guides/prog_guide/memarea_lib.rst
index bf19090294..157baf3c7e 100644
--- a/doc/guides/prog_guide/memarea_lib.rst
+++ b/doc/guides/prog_guide/memarea_lib.rst
@@ -33,6 +33,12 @@ returns the pointer to the created memarea or ``NULL`` if the creation failed.
 
 The ``rte_memarea_destroy()`` function is used to destroy a memarea.
 
+The ``rte_memarea_alloc()`` function is used to allocate one memory object
+from the memarea.
+
+The ``rte_memarea_free()`` function is used to free one memory object which
+was allocated by ``rte_memarea_alloc()``.
+
 Debug Mode
 ----------
 
diff --git a/lib/memarea/memarea_private.h b/lib/memarea/memarea_private.h
index fd485bb7e7..ab6253294e 100644
--- a/lib/memarea/memarea_private.h
+++ b/lib/memarea/memarea_private.h
@@ -52,10 +52,20 @@ enum {
 #define MEMAREA_OBJECT_GET_SIZE(hdr) \
 		((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \
 		 sizeof(struct memarea_objhdr) - sizeof(struct memarea_objtlr))
+#define MEMAREA_SPLIT_OBJECT_MIN_SIZE \
+		(sizeof(struct memarea_objhdr) + MEMAREA_OBJECT_SIZE_ALIGN + \
+		 sizeof(struct memarea_objtlr))
+#define MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz) \
+		RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr) + alloc_sz + \
+			    sizeof(struct memarea_objtlr))
 #else
 #define MEMAREA_OBJECT_GET_SIZE(hdr) \
 		((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \
 		 sizeof(struct memarea_objhdr))
+#define MEMAREA_SPLIT_OBJECT_MIN_SIZE \
+		(sizeof(struct memarea_objhdr) + MEMAREA_OBJECT_SIZE_ALIGN)
+#define MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz) \
+		RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr) + alloc_sz)
 #endif
 
 struct memarea_objhdr {
diff --git a/lib/memarea/rte_memarea.c b/lib/memarea/rte_memarea.c
index ea9067fb35..0c538b54ba 100644
--- a/lib/memarea/rte_memarea.c
+++ b/lib/memarea/rte_memarea.c
@@ -2,8 +2,10 @@
  * Copyright(c) 2023 HiSilicon Limited
  */
 
+#include <inttypes.h>
 #include <rte_common.h>
+#include <rte_errno.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 
@@ -94,6 +96,8 @@ memarea_alloc_area(const struct rte_memarea_param *init)
 					    init->heap.socket_id);
 	else if (init->source == RTE_MEMAREA_SOURCE_LIBC)
 		ptr = memarea_alloc_from_libc(init->total_sz);
+	else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA)
+		ptr = rte_memarea_alloc(init->ma.src, init->total_sz);
 
 	return ptr;
 }
@@ -105,6 +109,8 @@ memarea_free_area(const struct rte_memarea_param *init, void *ptr)
 		rte_free(ptr);
 	else if (init->source == RTE_MEMAREA_SOURCE_LIBC)
 		free(ptr);
+	else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA)
+		rte_memarea_free(init->ma.src, ptr);
 }
 
 static inline void
@@ -206,3 +212,156 @@ rte_memarea_destroy(struct rte_memarea *ma)
 	memarea_free_area(&ma->init, ma->area_base);
 	rte_free(ma);
 }
+
+static inline void
+memarea_lock(struct rte_memarea *ma)
+{
+	if (ma->init.mt_safe)
+		rte_spinlock_lock(&ma->lock);
+}
+
+static inline void
+memarea_unlock(struct rte_memarea *ma)
+{
+	if (ma->init.mt_safe)
+		rte_spinlock_unlock(&ma->lock);
+}
+
+static inline void
+memarea_check_cookie(const struct rte_memarea *ma, const struct memarea_objhdr *hdr, int status)
+{
+#ifdef RTE_LIBRTE_MEMAREA_DEBUG
+	static const char *const str[] = { "PASS", "FAILED" };
+	struct memarea_objtlr *tlr;
+	bool hdr_fail, tlr_fail;
+
+	if (hdr == ma->guard_hdr)
+		return;
+
+	tlr = RTE_PTR_SUB(TAILQ_NEXT(hdr, obj_next), sizeof(struct memarea_objtlr));
+	hdr_fail = (status == COOKIE_EXPECT_STATUS_AVAILABLE &&
+		    hdr->cookie != MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE) ||
+		   (status == COOKIE_EXPECT_STATUS_ALLOCATED &&
+		    hdr->cookie != MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE) ||
+		   (status == COOKIE_EXPECT_STATUS_VALID &&
+		    (hdr->cookie !=
+		     MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE &&
+		     hdr->cookie != MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE));
+	tlr_fail = (tlr->cookie != MEMAREA_OBJECT_TRAILER_COOKIE);
+	if (!hdr_fail && !tlr_fail)
+		return;
+
+	rte_panic("MEMAREA: %s check cookies failed! addr-%p header-cookie<0x%" PRIx64 " %s> trailer-cookie<0x%" PRIx64 " %s>\n",
+		  ma->init.name, RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr)),
+		  hdr->cookie, str[hdr_fail], tlr->cookie, str[tlr_fail]);
+#else
+	RTE_SET_USED(ma);
+	RTE_SET_USED(hdr);
+	RTE_SET_USED(status);
+#endif
+}
+
+static inline void
+memarea_split_object(struct rte_memarea *ma, struct memarea_objhdr *hdr, size_t alloc_sz)
+{
+	struct memarea_objhdr *split_hdr;
+
+	split_hdr = MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz);
+	memarea_set_cookie(split_hdr, COOKIE_TARGET_STATUS_NEW_AVAILABLE);
+	TAILQ_INSERT_AFTER(&ma->obj_list, hdr, split_hdr, obj_next);
+	TAILQ_INSERT_AFTER(&ma->avail_list, hdr, split_hdr, avail_next);
+}
+
+void *
+rte_memarea_alloc(struct rte_memarea *ma, size_t size)
+{
+	size_t align_sz = RTE_ALIGN(size, MEMAREA_OBJECT_SIZE_ALIGN);
+	struct memarea_objhdr *hdr;
+	size_t avail_sz;
+	void *ptr = NULL;
+
+	if (ma == NULL || size == 0 || align_sz < size) {
+		rte_errno = EINVAL;
+		return ptr;
+	}
+
+	memarea_lock(ma);
+
+	/** traverse every available object, return the first satisfied one. */
+	TAILQ_FOREACH(hdr, &ma->avail_list, avail_next) {
+		/** 1st: check whether the object size meets. */
+		memarea_check_cookie(ma, hdr, COOKIE_EXPECT_STATUS_AVAILABLE);
+		avail_sz = MEMAREA_OBJECT_GET_SIZE(hdr);
+		if (avail_sz < align_sz)
+			continue;
+
+		/** 2nd: if the object size is too long, a new object can be split. */
+		if (avail_sz - align_sz > MEMAREA_SPLIT_OBJECT_MIN_SIZE)
+			memarea_split_object(ma, hdr, align_sz);
+
+		/** 3rd: allocate successful.
+		 */
+		TAILQ_REMOVE(&ma->avail_list, hdr, avail_next);
+		MEMAREA_OBJECT_MARK_ALLOCATED(hdr);
+		memarea_set_cookie(hdr, COOKIE_TARGET_STATUS_ALLOCATED);
+
+		ptr = RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr));
+		break;
+	}
+
+	memarea_unlock(ma);
+
+	if (ptr == NULL)
+		rte_errno = ENOMEM;
+	return ptr;
+}
+
+static inline void
+memarea_merge_object(struct rte_memarea *ma, struct memarea_objhdr *curr,
+		     struct memarea_objhdr *next)
+{
+	RTE_SET_USED(curr);
+	TAILQ_REMOVE(&ma->obj_list, next, obj_next);
+	TAILQ_REMOVE(&ma->avail_list, next, avail_next);
+	memarea_set_cookie(next, COOKIE_TARGET_STATUS_CLEARED);
+}
+
+void
+rte_memarea_free(struct rte_memarea *ma, void *ptr)
+{
+	struct memarea_objhdr *hdr, *prev, *next;
+
+	if (ma == NULL || ptr == NULL) {
+		rte_errno = EINVAL;
+		return;
+	}
+
+	hdr = RTE_PTR_SUB(ptr, sizeof(struct memarea_objhdr));
+	if (!MEMAREA_OBJECT_IS_ALLOCATED(hdr)) {
+		RTE_MEMAREA_LOG(ERR, "detect invalid object in %s!", ma->init.name);
+		rte_errno = EFAULT;
+		return;
+	}
+	memarea_check_cookie(ma, hdr, COOKIE_EXPECT_STATUS_ALLOCATED);
+
+	memarea_lock(ma);
+
+	/** 1st: add to avail list. */
+	TAILQ_INSERT_HEAD(&ma->avail_list, hdr, avail_next);
+	memarea_set_cookie(hdr, COOKIE_TARGET_STATUS_AVAILABLE);
+
+	/** 2nd: merge if previous object is avail. */
+	prev = TAILQ_PREV(hdr, memarea_objhdr_list, obj_next);
+	if (prev != NULL && !MEMAREA_OBJECT_IS_ALLOCATED(prev)) {
+		memarea_check_cookie(ma, prev, COOKIE_EXPECT_STATUS_AVAILABLE);
+		memarea_merge_object(ma, prev, hdr);
+		hdr = prev;
+	}
+
+	/** 3rd: merge if next object is avail.
+	 */
+	next = TAILQ_NEXT(hdr, obj_next);
+	if (next != NULL && !MEMAREA_OBJECT_IS_ALLOCATED(next)) {
+		memarea_check_cookie(ma, next, COOKIE_EXPECT_STATUS_AVAILABLE);
+		memarea_merge_object(ma, hdr, next);
+	}
+
+	memarea_unlock(ma);
+}
diff --git a/lib/memarea/rte_memarea.h b/lib/memarea/rte_memarea.h
index 1d4381efd7..bb1bd5bae5 100644
--- a/lib/memarea/rte_memarea.h
+++ b/lib/memarea/rte_memarea.h
@@ -134,6 +134,52 @@ struct rte_memarea *rte_memarea_create(const struct rte_memarea_param *init);
 __rte_experimental
 void rte_memarea_destroy(struct rte_memarea *ma);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Allocate memory from memarea.
+ *
+ * Allocate one memory object from the memarea.
+ *
+ * @param ma
+ *   The pointer of memarea.
+ * @param size
+ *   The memory size to be allocated.
+ *
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (see the
+ *     rte_errno).
+ *   - Otherwise, the pointer to the allocated object.
+ *
+ * @note The memory allocated is not guaranteed to be zeroed.
+ */
+__rte_experimental
+void *rte_memarea_alloc(struct rte_memarea *ma, size_t size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Free memory back to memarea.
+ *
+ * Free one memory object back to the memarea.
+ * @note The memory object must have been returned by a previous call to
+ * rte_memarea_alloc(), and it must be freed to the same memarea from which
+ * it was allocated. The behaviour of rte_memarea_free() is undefined if the
+ * memarea or pointer does not match these requirements.
+ *
+ * @param ma
+ *   The pointer of memarea. If the ma is NULL, the function does nothing.
+ * @param ptr
+ *   The pointer of the memory object to be freed. If the pointer is NULL,
+ *   the function does nothing.
+ *
+ * @note The rte_errno is set if free failed.
+ */
+__rte_experimental
+void rte_memarea_free(struct rte_memarea *ma, void *ptr);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/memarea/version.map b/lib/memarea/version.map
index f36a04d7cf..effbd0b488 100644
--- a/lib/memarea/version.map
+++ b/lib/memarea/version.map
@@ -1,8 +1,10 @@
 EXPERIMENTAL {
 	global:
 
+	rte_memarea_alloc;
 	rte_memarea_create;
 	rte_memarea_destroy;
+	rte_memarea_free;
 
 	local: *;
 };
-- 
2.17.1