From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from proxy.6wind.com (host.76.145.23.62.rev.coltfrance.com [62.23.145.76]) by dpdk.org (Postfix) with ESMTP id 996A93977 for ; Thu, 14 Apr 2016 12:20:25 +0200 (CEST)
Received: from glumotte.dev.6wind.com (unknown [10.16.0.195]) by proxy.6wind.com (Postfix) with ESMTP id 0F04B28F4E; Thu, 14 Apr 2016 12:19:42 +0200 (CEST)
From: Olivier Matz 
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, stephen@networkplumber.org
Date: Thu, 14 Apr 2016 12:19:54 +0200
Message-Id: <1460629199-32489-32-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1460629199-32489-1-git-send-email-olivier.matz@6wind.com>
References: <1457540381-20274-1-git-send-email-olivier.matz@6wind.com> <1460629199-32489-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH 31/36] mempool: make mempool populate and free api public
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
X-List-Received-Date: Thu, 14 Apr 2016 10:20:26 -0000

Add the following functions to the public mempool API:

- rte_mempool_create_empty()
- rte_mempool_populate_phys()
- rte_mempool_populate_phys_tab()
- rte_mempool_populate_virt()
- rte_mempool_populate_default()
- rte_mempool_populate_anon()
- rte_mempool_free()

Signed-off-by: Olivier Matz 
---
 lib/librte_mempool/rte_mempool.c           |  14 +--
 lib/librte_mempool/rte_mempool.h           | 168 +++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool_version.map |   9 +-
 3 files changed, 183 insertions(+), 8 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 5c21f08..4850f5d 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -365,7 +365,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
 /* Add objects in the pool, using a physically contiguous memory
  * zone. Return the number of objects added, or a negative value
  * on error. */
-static int
+int
 rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 	phys_addr_t paddr, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
@@ -423,7 +423,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
 
 /* Add objects in the pool, using a table of physical pages. Return the
  * number of objects added, or a negative value on error. */
-static int
+int
 rte_mempool_populate_phys_tab(struct rte_mempool *mp, char *vaddr,
 	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
 	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque)
@@ -458,7 +458,7 @@ rte_mempool_populate_phys_tab(struct rte_mempool *mp, char *vaddr,
 
 /* Populate the mempool with a virtual area. Return the number of
  * objects added, or a negative value on error. */
-static int
+int
 rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque)
@@ -518,7 +518,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error. */
-static int
+int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
 	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
@@ -609,7 +609,7 @@ rte_mempool_memchunk_anon_free(struct rte_mempool_memhdr *memhdr,
 }
 
 /* populate the mempool with an anonymous mapping */
-__rte_unused static int
+int
 rte_mempool_populate_anon(struct rte_mempool *mp)
 {
 	size_t size;
@@ -650,7 +650,7 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 }
 
 /* free a mempool */
-static void
+void
 rte_mempool_free(struct rte_mempool *mp)
 {
 	struct rte_mempool_list *mempool_list = NULL;
@@ -679,7 +679,7 @@ rte_mempool_free(struct rte_mempool *mp)
 }
 
 /* create an empty mempool */
-static struct rte_mempool *
+struct rte_mempool *
 rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	unsigned cache_size, unsigned private_data_size,
 	int socket_id, unsigned flags)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 721d8e7..fe4e6fd 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -502,6 +502,174 @@ rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);
 
 /**
+ * Create an empty mempool
+ *
+ * The mempool is allocated and initialized, but it is not populated: no
+ * memory is allocated for the mempool elements. The user has to call
+ * rte_mempool_populate_*() to add memory chunks to the pool. Once
+ * populated, the user may also want to initialize each object with
+ * rte_mempool_obj_iter().
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The maximum number of elements that can be added in the mempool.
+ *   The optimum size (in terms of memory usage) for a mempool is when n
+ *   is a power of two minus one: n = (2^q - 1).
+ * @param elt_size
+ *   The size of each element.
+ * @param cache_size
+ *   Size of the cache. See rte_mempool_create() for details.
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   Flags controlling the behavior of the mempool. See
+ *   rte_mempool_create() for details.
+ * @return
+ *   The pointer to the newly allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately. See rte_mempool_create() for details.
+ */
+struct rte_mempool *
+rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
+	unsigned cache_size, unsigned private_data_size,
+	int socket_id, unsigned flags);
+/**
+ * Free a mempool
+ *
+ * Unlink the mempool from the global list, free the memory chunks, and all
+ * memory referenced by the mempool. The objects must not be used by
+ * other cores as they will be freed.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ */
+void
+rte_mempool_free(struct rte_mempool *mp);
+
+/**
+ * Add physically contiguous memory for objects in the pool at init
+ *
+ * Add a virtually and physically contiguous memory chunk in the pool
+ * where objects can be instantiated.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param paddr
+ *   The physical address.
+ * @param len
+ *   The length of memory in bytes.
+ * @param free_cb
+ *   The callback used to free this chunk when destroying the mempool.
+ * @param opaque
+ *   An opaque argument passed to free_cb.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
+	phys_addr_t paddr, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque);
+
+/**
+ * Add physical memory for objects in the pool at init
+ *
+ * Add a virtually contiguous memory chunk in the pool where objects can
+ * be instantiated. The physical addresses corresponding to the virtual
+ * area are described in paddr[], pg_num, pg_shift.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param paddr
+ *   An array of physical addresses of each page composing the virtual
+ *   area.
+ * @param pg_num
+ *   Number of elements in the paddr array.
+ * @param pg_shift
+ *   LOG2 of the physical page size.
+ * @param free_cb
+ *   The callback used to free this chunk when destroying the mempool.
+ * @param opaque
+ *   An opaque argument passed to free_cb.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunks are not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+int rte_mempool_populate_phys_tab(struct rte_mempool *mp, char *vaddr,
+	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
+	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
+
+/**
+ * Add virtually contiguous memory for objects in the pool at init
+ *
+ * Add a virtually contiguous memory chunk in the pool where objects can
+ * be instantiated.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param addr
+ *   The virtual address of memory that should be used to store objects.
+ *   Must be page-aligned.
+ * @param len
+ *   The length of memory in bytes. Must be page-aligned.
+ * @param pg_sz
+ *   The size of memory pages in this virtual area.
+ * @param free_cb
+ *   The callback used to free this chunk when destroying the mempool.
+ * @param opaque
+ *   An opaque argument passed to free_cb.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+int
+rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
+	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
+	void *opaque);
+
+/**
+ * Add memory for objects in the pool at init
+ *
+ * This is the default function used by rte_mempool_create() to populate
+ * the mempool. It adds memory allocated using rte_memzone_reserve().
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+int rte_mempool_populate_default(struct rte_mempool *mp);
+
+/**
+ * Add memory from anonymous mapping for objects in the pool at init
+ *
+ * This function mmaps an anonymous memory zone that is locked in
+ * memory to store the objects of the mempool.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+int rte_mempool_populate_anon(struct rte_mempool *mp);
+
+/**
  * Call a function for each mempool element
  *
  * Iterate across all objects attached to a rte_mempool and call the
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index c4f2da0..7d1f670 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -16,11 +16,18 @@ DPDK_2.0 {
 	local: *;
 };
 
-DPDK_16.07 {
+DPDK_16.7 {
 	global:
 
 	rte_mempool_obj_iter;
 	rte_mempool_mem_iter;
+	rte_mempool_create_empty;
+	rte_mempool_populate_phys;
+	rte_mempool_populate_phys_tab;
+	rte_mempool_populate_virt;
+	rte_mempool_populate_default;
+	rte_mempool_populate_anon;
+	rte_mempool_free;
 
 	local: *;
 } DPDK_2.0;
-- 
2.1.4
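
For readers of the archive, the following is a minimal sketch (not part of the patch) of how an application might use the API exported here: create an empty pool, add memory with the default populate method, run a per-object constructor, then free the pool. It assumes an initialized EAL and the usual DPDK headers; the pool name, sizes, and the my_obj_init() callback are made up for illustration, and later DPDK releases may require additional setup (for example selecting the mempool ops) before populating.

/* Illustrative only -- not taken from the patch. */
#include <stdlib.h>
#include <string.h>

#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_memory.h>
#include <rte_mempool.h>

#define MY_ELT_SIZE  256
#define MY_POOL_SIZE ((1 << 13) - 1)   /* 2^q - 1, as the API doc advises */

/* rte_mempool_obj_cb_t callback: invoked once per object by
 * rte_mempool_obj_iter(); here it simply zeroes each element. */
static void
my_obj_init(struct rte_mempool *mp, void *opaque, void *obj, unsigned obj_idx)
{
	(void)mp;
	(void)opaque;
	(void)obj_idx;
	memset(obj, 0, MY_ELT_SIZE);
}

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;
	void *obj;

	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "cannot init EAL\n");

	/* 1. allocate an empty pool: no memory for the elements yet */
	mp = rte_mempool_create_empty("my_pool", MY_POOL_SIZE, MY_ELT_SIZE,
		32, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "cannot create mempool: %s\n",
			rte_strerror(rte_errno));

	/* 2. add memory chunks; the default method reserves memzones */
	if (rte_mempool_populate_default(mp) < 0)
		rte_exit(EXIT_FAILURE, "cannot populate mempool\n");

	/* 3. optionally run a constructor on every object */
	rte_mempool_obj_iter(mp, my_obj_init, NULL);

	/* the pool now behaves like one built with rte_mempool_create() */
	if (rte_mempool_get(mp, &obj) == 0)
		rte_mempool_put(mp, obj);

	/* 4. free the chunks and the pool itself */
	rte_mempool_free(mp);
	return 0;
}

The point of splitting rte_mempool_create_empty() from the populate step is that an application can supply its own memory instead of the default memzone-based allocation, for instance with rte_mempool_populate_virt(), rte_mempool_populate_phys_tab(), or rte_mempool_populate_anon().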