From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chengwen Feng
Subject: [PATCH v16 3/6] memarea: support alloc and free API
Date: Mon, 10 Jul 2023 06:49:20 +0000
Message-ID: <20230710064923.19849-4-fengchengwen@huawei.com>
In-Reply-To: <20230710064923.19849-1-fengchengwen@huawei.com>
References: <20220721044648.6817-1-fengchengwen@huawei.com>
 <20230710064923.19849-1-fengchengwen@huawei.com>
List-Id: DPDK patches and discussions
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This patch adds the rte_memarea_alloc() and rte_memarea_free() APIs.

Signed-off-by: Chengwen Feng
Reviewed-by: Dongdong Liu
Acked-by: Morten Brørup
---
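Illustrative usage sketch for reviewers, not part of the patch: create a
memarea backed by libc memory, allocate one object and free it again. The
field and enum names (total_sz, source, mt_safe, RTE_MEMAREA_SOURCE_LIBC)
are taken from this series; treating 'name' as a fixed-size char array and
'mt_safe' as a bool is an assumption, and error handling is kept minimal.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #include <rte_memarea.h>

    static int
    memarea_alloc_free_example(void)
    {
        struct rte_memarea_param init;
        struct rte_memarea *ma;
        void *obj;

        memset(&init, 0, sizeof(init));
        /* 'name' is assumed to be a fixed-size char array. */
        snprintf(init.name, sizeof(init.name), "example-ma");
        init.source = RTE_MEMAREA_SOURCE_LIBC; /* backing memory comes from the libc heap */
        init.total_sz = 1 << 20;               /* 1 MiB managed by the memarea */
        init.mt_safe = true;                   /* alloc/free take the internal spinlock */

        ma = rte_memarea_create(&init);
        if (ma == NULL)
            return -1;

        /* the requested size is rounded up to MEMAREA_OBJECT_SIZE_ALIGN internally */
        obj = rte_memarea_alloc(ma, 256);
        if (obj == NULL) {
            rte_memarea_destroy(ma);
            return -1;
        }

        /* an object must be freed back to the same memarea it was allocated from */
        rte_memarea_free(ma, obj);
        rte_memarea_destroy(ma);
        return 0;
    }
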
 doc/guides/prog_guide/memarea_lib.rst |   6 +
 lib/memarea/memarea_private.h         |  10 ++
 lib/memarea/rte_memarea.c             | 164 ++++++++++++++++++++++++++
 lib/memarea/rte_memarea.h             |  44 +++++++
 lib/memarea/version.map               |   2 +
 5 files changed, 226 insertions(+)

diff --git a/doc/guides/prog_guide/memarea_lib.rst b/doc/guides/prog_guide/memarea_lib.rst
index bf19090294..157baf3c7e 100644
--- a/doc/guides/prog_guide/memarea_lib.rst
+++ b/doc/guides/prog_guide/memarea_lib.rst
@@ -33,6 +33,12 @@ returns the pointer to the created memarea or ``NULL`` if the creation failed.
 
 The ``rte_memarea_destroy()`` function is used to destroy a memarea.
 
+The ``rte_memarea_alloc()`` function is used to allocate one memory object
+from the memarea.
+
+The ``rte_memarea_free()`` function is used to free one memory object which
+was allocated by ``rte_memarea_alloc()``.
+
 Debug Mode
 ----------
 
diff --git a/lib/memarea/memarea_private.h b/lib/memarea/memarea_private.h
index 384c6dde9d..cef7d0f859 100644
--- a/lib/memarea/memarea_private.h
+++ b/lib/memarea/memarea_private.h
@@ -22,10 +22,20 @@
 #define MEMAREA_OBJECT_GET_SIZE(hdr) \
 		((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \
 		 sizeof(struct memarea_objhdr) - sizeof(struct memarea_objtlr))
+#define MEMAREA_SPLIT_OBJECT_MIN_SIZE \
+		(sizeof(struct memarea_objhdr) + MEMAREA_OBJECT_SIZE_ALIGN + \
+		 sizeof(struct memarea_objtlr))
+#define MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz) \
+		RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr) + alloc_sz + \
+			    sizeof(struct memarea_objtlr))
 #else
 #define MEMAREA_OBJECT_GET_SIZE(hdr) \
 		((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \
 		 sizeof(struct memarea_objhdr))
+#define MEMAREA_SPLIT_OBJECT_MIN_SIZE \
+		(sizeof(struct memarea_objhdr) + MEMAREA_OBJECT_SIZE_ALIGN)
+#define MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz) \
+		RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr) + alloc_sz)
 #endif
 
 struct memarea_objhdr {
diff --git a/lib/memarea/rte_memarea.c b/lib/memarea/rte_memarea.c
index 69ffeac4e4..d941e64a1e 100644
--- a/lib/memarea/rte_memarea.c
+++ b/lib/memarea/rte_memarea.c
@@ -2,8 +2,10 @@
  * Copyright(c) 2023 HiSilicon Limited
  */
 
+#include
 #include
 #include
 
+#include
 #include
 #include
@@ -94,6 +96,8 @@ memarea_alloc_area(const struct rte_memarea_param *init)
 					 init->heap.socket_id);
 	else if (init->source == RTE_MEMAREA_SOURCE_LIBC)
 		ptr = memarea_alloc_from_libc(init->total_sz);
+	else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA)
+		ptr = rte_memarea_alloc(init->ma.src, init->total_sz);
 
 	return ptr;
 }
@@ -105,6 +109,8 @@ memarea_free_area(const struct rte_memarea_param *init, void *ptr)
 		rte_free(ptr);
 	else if (init->source == RTE_MEMAREA_SOURCE_LIBC)
 		free(ptr);
+	else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA)
+		rte_memarea_free(init->ma.src, ptr);
 }
 
 /**
@@ -219,3 +225,161 @@ rte_memarea_destroy(struct rte_memarea *ma)
 	memarea_free_area(&ma->init, ma->area_base);
 	rte_free(ma);
 }
+
+static inline void
+memarea_lock(struct rte_memarea *ma)
+{
+	if (ma->init.mt_safe)
+		rte_spinlock_lock(&ma->lock);
+}
+
+static inline void
+memarea_unlock(struct rte_memarea *ma)
+{
+	if (ma->init.mt_safe)
+		rte_spinlock_unlock(&ma->lock);
+}
+
+/**
+ * Check cookie or panic.
+ *
+ * @param status
+ *   - 0: object is supposed to be available.
+ *   - 1: object is supposed to be allocated.
+ *   - 2: just check that cookie is valid (available or allocated).
+ */
+static inline void
+memarea_check_cookie(const struct rte_memarea *ma, const struct memarea_objhdr *hdr, int status)
+{
+#ifdef RTE_LIBRTE_MEMAREA_DEBUG
+	static const char *const str[] = { "PASS", "FAILED" };
+	struct memarea_objtlr *tlr;
+	bool hdr_fail, tlr_fail;
+
+	if (hdr == ma->guard_hdr)
+		return;
+
+	tlr = RTE_PTR_SUB(TAILQ_NEXT(hdr, obj_next), sizeof(struct memarea_objtlr));
+	hdr_fail = (status == 0 && hdr->cookie != MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE) ||
+		   (status == 1 && hdr->cookie != MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE) ||
+		   (status == 2 && (hdr->cookie != MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE &&
+				    hdr->cookie != MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE));
+	tlr_fail = (tlr->cookie != MEMAREA_OBJECT_TRAILER_COOKIE);
+	if (!hdr_fail && !tlr_fail)
+		return;
+
+	rte_panic("MEMAREA: %s check cookies failed! addr-%p header-cookie<0x%" PRIx64 " %s> trailer-cookie<0x%" PRIx64 " %s>\n",
+		  ma->init.name, RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr)),
+		  hdr->cookie, str[hdr_fail], tlr->cookie, str[tlr_fail]);
+#else
+	RTE_SET_USED(ma);
+	RTE_SET_USED(hdr);
+	RTE_SET_USED(status);
+#endif
+}
+
+static inline void
+memarea_split_object(struct rte_memarea *ma, struct memarea_objhdr *hdr, size_t alloc_sz)
+{
+	struct memarea_objhdr *split_hdr;
+
+	split_hdr = MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz);
+	memarea_set_cookie(split_hdr, 2);
+	TAILQ_INSERT_AFTER(&ma->obj_list, hdr, split_hdr, obj_next);
+	TAILQ_INSERT_AFTER(&ma->avail_list, hdr, split_hdr, avail_next);
+}
+
+void *
+rte_memarea_alloc(struct rte_memarea *ma, size_t size)
+{
+	size_t align_sz = RTE_ALIGN(size, MEMAREA_OBJECT_SIZE_ALIGN);
+	struct memarea_objhdr *hdr;
+	size_t avail_sz;
+	void *ptr = NULL;
+
+	if (ma == NULL || size == 0 || align_sz < size) {
+		rte_errno = EINVAL;
+		return ptr;
+	}
+
+	memarea_lock(ma);
+
+	/** traverse every available object, return the first satisfied one. */
+	TAILQ_FOREACH(hdr, &ma->avail_list, avail_next) {
+		/** 1st: check whether the object size meets. */
+		memarea_check_cookie(ma, hdr, 0);
+		avail_sz = MEMAREA_OBJECT_GET_SIZE(hdr);
+		if (avail_sz < align_sz)
+			continue;
+
+		/** 2nd: if the object size is too long, a new object can be split. */
+		if (avail_sz - align_sz > MEMAREA_SPLIT_OBJECT_MIN_SIZE)
+			memarea_split_object(ma, hdr, align_sz);
+
+		/** 3rd: allocate successful. */
+		TAILQ_REMOVE(&ma->avail_list, hdr, avail_next);
+		MEMAREA_OBJECT_MARK_ALLOCATED(hdr);
+		memarea_set_cookie(hdr, 1);
+
+		ptr = RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr));
+		break;
+	}
+
+	memarea_unlock(ma);
+
+	if (ptr == NULL)
+		rte_errno = ENOMEM;
+	return ptr;
+}
+
+static inline void
+memarea_merge_object(struct rte_memarea *ma, struct memarea_objhdr *curr,
+		     struct memarea_objhdr *next)
+{
+	RTE_SET_USED(curr);
+	TAILQ_REMOVE(&ma->obj_list, next, obj_next);
+	TAILQ_REMOVE(&ma->avail_list, next, avail_next);
+	memarea_set_cookie(next, 4);
+}
+
+void
+rte_memarea_free(struct rte_memarea *ma, void *ptr)
+{
+	struct memarea_objhdr *hdr, *prev, *next;
+
+	if (ma == NULL || ptr == NULL) {
+		rte_errno = EINVAL;
+		return;
+	}
+
+	hdr = RTE_PTR_SUB(ptr, sizeof(struct memarea_objhdr));
+	if (!MEMAREA_OBJECT_IS_ALLOCATED(hdr)) {
+		RTE_MEMAREA_LOG(ERR, "detect invalid object in %s!", ma->init.name);
+		rte_errno = EFAULT;
+		return;
+	}
+	memarea_check_cookie(ma, hdr, 1);
+
+	memarea_lock(ma);
+
+	/** 1st: add to avail list. */
+	TAILQ_INSERT_HEAD(&ma->avail_list, hdr, avail_next);
+	memarea_set_cookie(hdr, 0);
+
+	/** 2nd: merge if previous object is avail. */
+	prev = TAILQ_PREV(hdr, memarea_objhdr_list, obj_next);
+	if (prev != NULL && !MEMAREA_OBJECT_IS_ALLOCATED(prev)) {
+		memarea_check_cookie(ma, prev, 0);
+		memarea_merge_object(ma, prev, hdr);
+		hdr = prev;
+	}
+
+	/** 3rd: merge if next object is avail. */
+	next = TAILQ_NEXT(hdr, obj_next);
+	if (next != NULL && !MEMAREA_OBJECT_IS_ALLOCATED(next)) {
+		memarea_check_cookie(ma, next, 0);
+		memarea_merge_object(ma, hdr, next);
+	}
+
+	memarea_unlock(ma);
+}
diff --git a/lib/memarea/rte_memarea.h b/lib/memarea/rte_memarea.h
index 1d4381efd7..f771fcaf68 100644
--- a/lib/memarea/rte_memarea.h
+++ b/lib/memarea/rte_memarea.h
@@ -134,6 +134,50 @@ struct rte_memarea *rte_memarea_create(const struct rte_memarea_param *init);
 __rte_experimental
 void rte_memarea_destroy(struct rte_memarea *ma);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Allocate memory from the memarea.
+ *
+ * Allocate one memory object from the memarea.
+ *
+ * @param ma
+ *   The pointer to the memarea.
+ * @param size
+ *   The memory size to be allocated.
+ *
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (see the
+ *     rte_errno).
+ *   - Otherwise, the pointer to the allocated object.
+ */
+__rte_experimental
+void *rte_memarea_alloc(struct rte_memarea *ma, size_t size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Free memory back to the memarea.
+ *
+ * Free one memory object back to the memarea.
+ * @note The memory object must have been returned by a previous call to
+ *   rte_memarea_alloc(), and it must be freed back to the same memarea from
+ *   which it was allocated. The behaviour of rte_memarea_free() is undefined
+ *   if the memarea or pointer does not meet these requirements.
+ *
+ * @param ma
+ *   The pointer to the memarea. If ma is NULL, the function does nothing.
+ * @param ptr
+ *   The pointer to the memory object to be freed. If the pointer is NULL,
+ *   the function does nothing.
+ *
+ * @note rte_errno is set if the free fails.
+ */
+__rte_experimental
+void rte_memarea_free(struct rte_memarea *ma, void *ptr);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/memarea/version.map b/lib/memarea/version.map
index f36a04d7cf..effbd0b488 100644
--- a/lib/memarea/version.map
+++ b/lib/memarea/version.map
@@ -1,8 +1,10 @@
 EXPERIMENTAL {
 	global:
 
+	rte_memarea_alloc;
 	rte_memarea_create;
 	rte_memarea_destroy;
+	rte_memarea_free;
 
 	local: *;
 };
-- 
2.17.1
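
Illustrative sketch of the RTE_MEMAREA_SOURCE_MEMAREA path added in
memarea_alloc_area()/memarea_free_area() above, again not part of the patch:
a child memarea whose backing memory is carved out of a parent memarea via
rte_memarea_alloc(). The ma.src field name is taken from that hunk; the
'name' array is the same assumption as in the earlier sketch.

    /* reuses the includes from the sketch above */
    static struct rte_memarea *
    memarea_make_child(struct rte_memarea *parent)
    {
        struct rte_memarea_param init;

        memset(&init, 0, sizeof(init));
        snprintf(init.name, sizeof(init.name), "child-ma");
        init.source = RTE_MEMAREA_SOURCE_MEMAREA; /* back the child with another memarea */
        init.ma.src = parent;      /* passed to rte_memarea_alloc() to obtain the area */
        init.total_sz = 64 * 1024; /* allocated from the parent as one object */
        return rte_memarea_create(&init);
    }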