From: Slava Ovsiienko <viacheslavo@mellanox.com>
To: Vu Pham <vuhuong@mellanox.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: Ori Kam <orika@mellanox.com>, Matan Azrad <matan@mellanox.com>,
Raslan Darawsheh <rasland@mellanox.com>,
Vu Pham <vuhuong@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH v3 3/4] common/mlx5: refactor memory management codes
Date: Wed, 8 Apr 2020 09:04:41 +0000
Message-ID: <AM4PR05MB32652EE4B9324560E4170210D2C00@AM4PR05MB3265.eurprd05.prod.outlook.com>
In-Reply-To: <20200407170058.9274-4-vuhuong@mellanox.com>
> -----Original Message-----
> From: Vu Pham <vuhuong@mellanox.com>
> Sent: Tuesday, April 7, 2020 20:01
> To: dev@dpdk.org
> Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Ori Kam
> <orika@mellanox.com>; Matan Azrad <matan@mellanox.com>; Raslan
> Darawsheh <rasland@mellanox.com>; Vu Pham <vuhuong@mellanox.com>
> Subject: [PATCH v3 3/4] common/mlx5: refactor memory management codes
>
> Refactor the common memory B-tree and cache management into the common
> driver. Replace some input parameters of the MR APIs with more common
> data structures like PD, port_id, share_cache, etc. so that multiple
> PMD drivers can use those MR APIs.
>
> Signed-off-by: Vu Pham <vuhuong@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
> drivers/common/mlx5/mlx5_common_mr.c | 1108
> +++++++++++++++++++++++
> drivers/common/mlx5/mlx5_common_mr.h | 160 ++++
> drivers/common/mlx5/rte_common_mlx5_version.map | 14 +
> 3 files changed, 1282 insertions(+)
> create mode 100644 drivers/common/mlx5/mlx5_common_mr.c
> create mode 100644 drivers/common/mlx5/mlx5_common_mr.h
>
> diff --git a/drivers/common/mlx5/mlx5_common_mr.c
> b/drivers/common/mlx5/mlx5_common_mr.c
> new file mode 100644
> index 0000000000..9d4a06dd5b
> --- /dev/null
> +++ b/drivers/common/mlx5/mlx5_common_mr.c
> @@ -0,0 +1,1108 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2016 6WIND S.A.
> + * Copyright 2020 Mellanox Technologies, Ltd
> + */
> +
> +#include <rte_eal_memconfig.h>
> +#include <rte_errno.h>
> +#include <rte_mempool.h>
> +#include <rte_malloc.h>
> +#include <rte_rwlock.h>
> +
> +#include "mlx5_glue.h"
> +#include "mlx5_common_mp.h"
> +#include "mlx5_common_mr.h"
> +#include "mlx5_common_utils.h"
> +
> +struct mr_find_contig_memsegs_data {
> + uintptr_t addr;
> + uintptr_t start;
> + uintptr_t end;
> + const struct rte_memseg_list *msl;
> +};
> +
> +/**
> + * Expand B-tree table to a given size. Can't be called while holding
> + * memory_hotplug_lock or share_cache.rwlock due to rte_realloc().
> + *
> + * @param bt
> + * Pointer to B-tree structure.
> + * @param n
> + * Number of entries for expansion.
> + *
> + * @return
> + * 0 on success, -1 on failure.
> + */
> +static int
> +mr_btree_expand(struct mlx5_mr_btree *bt, int n)
> +{
> + void *mem;
> + int ret = 0;
> +
> + if (n <= bt->size)
> + return ret;
> + /*
> + * Downside of directly using rte_realloc() is that SOCKET_ID_ANY is
> + * used inside if there's no room to expand. Because this is a quite
> + * rare case and part of a very slow path, it is acceptable.
> + * Initially cache_bh[] will be given practically enough space, and once
> + * it is expanded, expansion won't be needed ever again.
> + */
> + mem = rte_realloc(bt->table, n * sizeof(struct mr_cache_entry), 0);
> + if (mem == NULL) {
> + /* Not an error, B-tree search will be skipped. */
> + DRV_LOG(WARNING, "failed to expand MR B-tree (%p) table",
> + (void *)bt);
> + ret = -1;
> + } else {
> + DRV_LOG(DEBUG, "expanded MR B-tree table (size=%u)", n);
> + bt->table = mem;
> + bt->size = n;
> + }
> + return ret;
> +}
> +
> +/**
> + * Look up LKey from given B-tree lookup table, store the last index
> + * and return searched LKey.
> + *
> + * @param bt
> + * Pointer to B-tree structure.
> + * @param[out] idx
> + * Pointer to index. Even on search failure, returns index where it stops
> + * searching so that index can be used when inserting a new entry.
> + * @param addr
> + * Search key.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on no match.
> + */
> +static uint32_t
> +mr_btree_lookup(struct mlx5_mr_btree *bt, uint16_t *idx, uintptr_t addr)
> +{
> + struct mr_cache_entry *lkp_tbl;
> + uint16_t n;
> + uint16_t base = 0;
> +
> + MLX5_ASSERT(bt != NULL);
> + lkp_tbl = *bt->table;
> + n = bt->len;
> + /* First entry must be NULL for comparison. */
> + MLX5_ASSERT(bt->len > 0 || (lkp_tbl[0].start == 0 &&
> + lkp_tbl[0].lkey == UINT32_MAX));
> + /* Binary search. */
> + do {
> + register uint16_t delta = n >> 1;
> +
> + if (addr < lkp_tbl[base + delta].start) {
> + n = delta;
> + } else {
> + base += delta;
> + n -= delta;
> + }
> + } while (n > 1);
> + MLX5_ASSERT(addr >= lkp_tbl[base].start);
> + *idx = base;
> + if (addr < lkp_tbl[base].end)
> + return lkp_tbl[base].lkey;
> + /* Not found. */
> + return UINT32_MAX;
> +}
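
The zero sentinel at index 0 is worth spelling out for other reviewers:
since lkp_tbl[0].start == 0 and its lkey is UINT32_MAX, the do/while
always converges on the rightmost entry whose start <= addr, and on a
miss *idx is directly usable as the insertion slot. A minimal standalone
sketch of the same search (simplified local types and made-up addresses,
not the driver structs):

  #include <inttypes.h>
  #include <stdio.h>

  /* Simplified stand-in for struct mr_cache_entry. */
  struct ent { uintptr_t start, end; uint32_t lkey; };

  static uint32_t
  lookup(const struct ent *tbl, uint16_t len, uint16_t *idx, uintptr_t addr)
  {
      uint16_t n = len, base = 0;

      do {
          uint16_t delta = n >> 1;

          if (addr < tbl[base + delta].start) {
              n = delta;
          } else {
              base += delta;
              n -= delta;
          }
      } while (n > 1);
      *idx = base;
      return addr < tbl[base].end ? tbl[base].lkey : UINT32_MAX;
  }

  int main(void)
  {
      /* Index 0 is the sentinel; entries are sorted by start address. */
      struct ent tbl[] = {
          { 0, 0, UINT32_MAX },
          { 0x1000, 0x3000, 0x11 },
          { 0x8000, 0x9000, 0x22 },
      };
      uint16_t idx;

      printf("0x%" PRIx32 "\n", lookup(tbl, 3, &idx, 0x2000)); /* hit: 0x11 */
      printf("0x%" PRIx32 "\n", lookup(tbl, 3, &idx, 0x5000)); /* miss */
      return 0;
  }
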
> +
> +/**
> + * Insert an entry to B-tree lookup table.
> + *
> + * @param bt
> + * Pointer to B-tree structure.
> + * @param entry
> + * Pointer to new entry to insert.
> + *
> + * @return
> + * 0 on success, -1 on failure.
> + */
> +static int
> +mr_btree_insert(struct mlx5_mr_btree *bt, struct mr_cache_entry *entry)
> +{
> + struct mr_cache_entry *lkp_tbl;
> + uint16_t idx = 0;
> + size_t shift;
> +
> + MLX5_ASSERT(bt != NULL);
> + MLX5_ASSERT(bt->len <= bt->size);
> + MLX5_ASSERT(bt->len > 0);
> + lkp_tbl = *bt->table;
> + /* Find out the slot for insertion. */
> + if (mr_btree_lookup(bt, &idx, entry->start) != UINT32_MAX) {
> + DRV_LOG(DEBUG,
> + "abort insertion to B-tree(%p): already exist at"
> + " idx=%u [0x%" PRIxPTR ", 0x%" PRIxPTR ") lkey=0x%x",
> + (void *)bt, idx, entry->start, entry->end, entry->lkey);
> + /* Already exist, return. */
> + return 0;
> + }
> + /* If table is full, return error. */
> + if (unlikely(bt->len == bt->size)) {
> + bt->overflow = 1;
> + return -1;
> + }
> + /* Insert entry. */
> + ++idx;
> + shift = (bt->len - idx) * sizeof(struct mr_cache_entry);
> + if (shift)
> + memmove(&lkp_tbl[idx + 1], &lkp_tbl[idx], shift);
> + lkp_tbl[idx] = *entry;
> + bt->len++;
> + DRV_LOG(DEBUG,
> + "inserted B-tree(%p)[%u],"
> + " [0x%" PRIxPTR ", 0x%" PRIxPTR ") lkey=0x%x",
> + (void *)bt, idx, entry->start, entry->end, entry->lkey);
> + return 0;
> +}
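
One more note on the insert path: it reuses the miss index from
mr_btree_lookup(), places the new entry at idx + 1 and shifts the tail
right with memmove(), so the array stays sorted for the next binary
search. A standalone sketch of just that step (hypothetical
fixed-capacity table, not the driver structs):

  #include <stdint.h>
  #include <string.h>

  struct ent { uintptr_t start, end; uint32_t lkey; };

  /* Insert e at slot (idx + 1), shifting the tail right. */
  static int
  sorted_insert(struct ent *tbl, uint16_t *len, uint16_t cap,
                uint16_t idx, const struct ent *e)
  {
      size_t shift;

      if (*len == cap)
          return -1; /* caller marks the table as overflowed */
      ++idx;
      shift = (*len - idx) * sizeof(*tbl);
      if (shift)
          memmove(&tbl[idx + 1], &tbl[idx], shift);
      tbl[idx] = *e;
      ++(*len);
      return 0;
  }
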
> +
> +/**
> + * Initialize B-tree and allocate memory for lookup table.
> + *
> + * @param bt
> + * Pointer to B-tree structure.
> + * @param n
> + * Number of entries to allocate.
> + * @param socket
> + * NUMA socket on which memory must be allocated.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +int
> +mlx5_mr_btree_init(struct mlx5_mr_btree *bt, int n, int socket)
> +{
> + if (bt == NULL) {
> + rte_errno = EINVAL;
> + return -rte_errno;
> + }
> + MLX5_ASSERT(!bt->table && !bt->size);
> + memset(bt, 0, sizeof(*bt));
> + bt->table = rte_calloc_socket("B-tree table",
> + n, sizeof(struct mr_cache_entry),
> + 0, socket);
> + if (bt->table == NULL) {
> + rte_errno = ENOMEM;
> + DEBUG("failed to allocate memory for btree cache on socket
> %d",
> + socket);
> + return -rte_errno;
> + }
> + bt->size = n;
> + /* First entry must be NULL for binary search. */
> + (*bt->table)[bt->len++] = (struct mr_cache_entry) {
> + .lkey = UINT32_MAX,
> + };
> + DEBUG("initialized B-tree %p with table %p",
> + (void *)bt, (void *)bt->table);
> + return 0;
> +}
> +
> +/**
> + * Free B-tree resources.
> + *
> + * @param bt
> + * Pointer to B-tree structure.
> + */
> +void
> +mlx5_mr_btree_free(struct mlx5_mr_btree *bt)
> +{
> + if (bt == NULL)
> + return;
> + DEBUG("freeing B-tree %p with table %p",
> + (void *)bt, (void *)bt->table);
> + rte_free(bt->table);
> + memset(bt, 0, sizeof(*bt));
> +}
> +
> +/**
> + * Dump all the entries in a B-tree
> + *
> + * @param bt
> + * Pointer to B-tree structure.
> + */
> +void
> +mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused)
> +{
> +#ifdef RTE_LIBRTE_MLX5_DEBUG
> + int idx;
> + struct mr_cache_entry *lkp_tbl;
> +
> + if (bt == NULL)
> + return;
> + lkp_tbl = *bt->table;
> + for (idx = 0; idx < bt->len; ++idx) {
> + struct mr_cache_entry *entry = &lkp_tbl[idx];
> +
> + DEBUG("B-tree(%p)[%u],"
> + " [0x%" PRIxPTR ", 0x%" PRIxPTR ") lkey=0x%x",
> + (void *)bt, idx, entry->start, entry->end, entry->lkey);
> + }
> +#endif
> +}
> +
> +/**
> + * Find virtually contiguous memory chunk in a given MR.
> + *
> + * @param mr
> + * Pointer to MR structure.
> + * @param[out] entry
> + * Pointer to returning MR cache entry. If not found, this will not be
> + * updated.
> + * @param base_idx
> + * Start index of the memseg bitmap.
> + *
> + * @return
> + * Next index to go on lookup.
> + */
> +static int
> +mr_find_next_chunk(struct mlx5_mr *mr, struct mr_cache_entry *entry,
> + int base_idx)
> +{
> + uintptr_t start = 0;
> + uintptr_t end = 0;
> + uint32_t idx = 0;
> +
> + /* MR for external memory doesn't have memseg list. */
> + if (mr->msl == NULL) {
> + struct ibv_mr *ibv_mr = mr->ibv_mr;
> +
> + MLX5_ASSERT(mr->ms_bmp_n == 1);
> + MLX5_ASSERT(mr->ms_n == 1);
> + MLX5_ASSERT(base_idx == 0);
> + /*
> + * Can't search it from memseg list but get it directly from
> + * verbs MR as there's only one chunk.
> + */
> + entry->start = (uintptr_t)ibv_mr->addr;
> + entry->end = (uintptr_t)ibv_mr->addr + mr->ibv_mr->length;
> + entry->lkey = rte_cpu_to_be_32(mr->ibv_mr->lkey);
> + /* Returning 1 ends iteration. */
> + return 1;
> + }
> + for (idx = base_idx; idx < mr->ms_bmp_n; ++idx) {
> + if (rte_bitmap_get(mr->ms_bmp, idx)) {
> + const struct rte_memseg_list *msl;
> + const struct rte_memseg *ms;
> +
> + msl = mr->msl;
> + ms = rte_fbarray_get(&msl->memseg_arr,
> + mr->ms_base_idx + idx);
> + MLX5_ASSERT(msl->page_sz == ms->hugepage_sz);
> + if (!start)
> + start = ms->addr_64;
> + end = ms->addr_64 + ms->hugepage_sz;
> + } else if (start) {
> + /* Passed the end of a fragment. */
> + break;
> + }
> + }
> + if (start) {
> + /* Found one chunk. */
> + entry->start = start;
> + entry->end = end;
> + entry->lkey = rte_cpu_to_be_32(mr->ibv_mr->lkey);
> + }
> + return idx;
> +}
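
Since mr_find_next_chunk() is the piece every cache insert and dump
loops over, a compact model may help: it is a run-length scan over the
memseg bitmap, extending [start, end) across consecutive set bits and
stopping at the first gap. Standalone sketch with a byte array standing
in for rte_bitmap and a made-up page size:

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SZ 0x1000 /* made-up page size */

  /* Find the next run of set bits; return the index to resume from. */
  static unsigned int
  next_chunk(const uint8_t *bmp, unsigned int n_bits, unsigned int base_idx,
             uintptr_t base_addr, uintptr_t *start, uintptr_t *end)
  {
      unsigned int i;

      *start = *end = 0;
      for (i = base_idx; i < n_bits; ++i) {
          if (bmp[i]) {
              if (!*start)
                  *start = base_addr + (uintptr_t)i * PAGE_SZ;
              *end = base_addr + (uintptr_t)(i + 1) * PAGE_SZ;
          } else if (*start) {
              break; /* passed the end of a fragment */
          }
      }
      return i;
  }

  int main(void)
  {
      const uint8_t bmp[] = { 1, 1, 0, 1 }; /* fragmented registration */
      uintptr_t s, e;
      unsigned int n = 0;

      while (n < 4) {
          n = next_chunk(bmp, 4, n, 0x100000, &s, &e);
          if (!e)
              break;
          printf("[0x%lx, 0x%lx)\n", (unsigned long)s, (unsigned long)e);
      }
      return 0; /* prints [0x100000, 0x102000) and [0x103000, 0x104000) */
  }
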
> +
> +/**
> + * Insert a MR to the global B-tree cache. It may fail due to a lack of
> + * memory. Then, this entry will have to be searched by
> + * mlx5_mr_lookup_list() in mlx5_mr_create() on miss.
> + *
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param mr
> + * Pointer to MR to insert.
> + *
> + * @return
> + * 0 on success, -1 on failure.
> + */
> +int
> +mlx5_mr_insert_cache(struct mlx5_mr_share_cache *share_cache,
> + struct mlx5_mr *mr)
> +{
> + unsigned int n;
> +
> + DRV_LOG(DEBUG, "Inserting MR(%p) to global cache(%p)",
> + (void *)mr, (void *)share_cache);
> + for (n = 0; n < mr->ms_bmp_n; ) {
> + struct mr_cache_entry entry;
> +
> + memset(&entry, 0, sizeof(entry));
> + /* Find a contiguous chunk and advance the index. */
> + n = mr_find_next_chunk(mr, &entry, n);
> + if (!entry.end)
> + break;
> + if (mr_btree_insert(&share_cache->cache, &entry) < 0) {
> + /*
> + * Overflowed, but the global table cannot be expanded
> + * because of deadlock.
> + */
> + return -1;
> + }
> + }
> + return 0;
> +}
> +
> +/**
> + * Look up address in the original global MR list.
> + *
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param[out] entry
> + * Pointer to returning MR cache entry. If no match, this will not be
> + * updated.
> + * @param addr
> + * Search key.
> + *
> + * @return
> + * Found MR on match, NULL otherwise.
> + */
> +struct mlx5_mr *
> +mlx5_mr_lookup_list(struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr)
> +{
> + struct mlx5_mr *mr;
> +
> + /* Iterate all the existing MRs. */
> + LIST_FOREACH(mr, &share_cache->mr_list, mr) {
> + unsigned int n;
> +
> + if (mr->ms_n == 0)
> + continue;
> + for (n = 0; n < mr->ms_bmp_n; ) {
> + struct mr_cache_entry ret;
> +
> + memset(&ret, 0, sizeof(ret));
> + n = mr_find_next_chunk(mr, &ret, n);
> + if (addr >= ret.start && addr < ret.end) {
> + /* Found. */
> + *entry = ret;
> + return mr;
> + }
> + }
> + }
> + return NULL;
> +}
> +
> +/**
> + * Look up address on global MR cache.
> + *
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param[out] entry
> + * Pointer to returning MR cache entry. If no match, this will not be
> + * updated.
> + * @param addr
> + * Search key.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on failure and rte_errno is set.
> + */
> +uint32_t
> +mlx5_mr_lookup_cache(struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr)
> +{
> + uint16_t idx;
> + uint32_t lkey = UINT32_MAX;
> + struct mlx5_mr *mr;
> +
> + /*
> + * If the global cache has overflowed since it failed to expand the
> + * B-tree table, it can't have all the existing MRs. Then, the address
> + * has to be searched by traversing the original MR list instead, which
> + * is a very slow path. Otherwise, the global cache is all-inclusive.
> + */
> + if (!unlikely(share_cache->cache.overflow)) {
> + lkey = mr_btree_lookup(&share_cache->cache, &idx, addr);
> + if (lkey != UINT32_MAX)
> + *entry = (*share_cache->cache.table)[idx];
> + } else {
> + /* Falling back to the slowest path. */
> + mr = mlx5_mr_lookup_list(share_cache, entry, addr);
> + if (mr != NULL)
> + lkey = entry->lkey;
> + }
> + MLX5_ASSERT(lkey == UINT32_MAX || (addr >= entry->start &&
> + addr < entry->end));
> + return lkey;
> +}
> +
> +/**
> + * Free MR resources. MR lock must not be held to avoid a deadlock:
> + * rte_free() can raise a memory free event and the callback function
> + * will spin on the lock.
> + *
> + * @param mr
> + * Pointer to MR to free.
> + */
> +static void
> +mr_free(struct mlx5_mr *mr)
> +{
> + if (mr == NULL)
> + return;
> + DRV_LOG(DEBUG, "freeing MR(%p):", (void *)mr);
> + if (mr->ibv_mr != NULL)
> + claim_zero(mlx5_glue->dereg_mr(mr->ibv_mr));
> + if (mr->ms_bmp != NULL)
> + rte_bitmap_free(mr->ms_bmp);
> + rte_free(mr);
> +}
> +
> +/**
> + * Rebuild the global B-tree cache of device from the original MR list.
> + *
> + * @param share_cache
> + *   Pointer to a global shared MR cache.
> + */
> +void
> +mlx5_mr_rebuild_cache(struct mlx5_mr_share_cache *share_cache)
> +{
> + struct mlx5_mr *mr;
> +
> + DRV_LOG(DEBUG, "Rebuild dev cache[] %p", (void *)share_cache);
> + /* Flush cache to rebuild. */
> + share_cache->cache.len = 1;
> + share_cache->cache.overflow = 0;
> + /* Iterate all the existing MRs. */
> + LIST_FOREACH(mr, &share_cache->mr_list, mr)
> + if (mlx5_mr_insert_cache(share_cache, mr) < 0)
> + return;
> +}
> +
> +/**
> + * Release resources of detached MR having no online entry.
> + *
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + */
> +static void
> +mlx5_mr_garbage_collect(struct mlx5_mr_share_cache *share_cache)
> +{
> + struct mlx5_mr *mr_next;
> + struct mlx5_mr_list free_list = LIST_HEAD_INITIALIZER(free_list);
> +
> + /* Must be called from the primary process. */
> + MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
> + /*
> + * MR can't be freed while holding the lock because rte_free() could
> + * call the memory free callback function, which is a deadlock
> + * situation.
> + */
> + rte_rwlock_write_lock(&share_cache->rwlock);
> + /* Detach the whole free list and release it after unlocking. */
> + free_list = share_cache->mr_free_list;
> + LIST_INIT(&share_cache->mr_free_list);
> + rte_rwlock_write_unlock(&share_cache->rwlock);
> + /* Release resources. */
> + mr_next = LIST_FIRST(&free_list);
> + while (mr_next != NULL) {
> + struct mlx5_mr *mr = mr_next;
> +
> + mr_next = LIST_NEXT(mr, mr);
> + mr_free(mr);
> + }
> +}
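
The detach-then-free split here deserves emphasis since it is the
pattern this whole file is built around: rte_free() can re-enter the MR
machinery through the memory event callback, so the free list is
snapped off under the write lock and walked only after unlocking. The
same shape in a generic standalone form (pthreads instead of
rte_rwlock, hypothetical node type):

  #include <pthread.h>
  #include <stdlib.h>

  struct node { struct node *next; };

  static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
  static struct node *free_list;

  static void
  collect(void)
  {
      struct node *n, *next;

      /* Detach the whole list while holding the lock... */
      pthread_rwlock_wrlock(&lock);
      n = free_list;
      free_list = NULL;
      pthread_rwlock_unlock(&lock);
      /* ...then release nodes outside it, so a free() callback that
       * takes the lock again cannot deadlock against us. */
      while (n != NULL) {
          next = n->next;
          free(n);
          n = next;
      }
  }
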
> +
> +/* Called during rte_memseg_contig_walk() by mlx5_mr_create(). */
> +static int
> +mr_find_contig_memsegs_cb(const struct rte_memseg_list *msl,
> + const struct rte_memseg *ms, size_t len, void *arg)
> +{
> + struct mr_find_contig_memsegs_data *data = arg;
> +
> + if (data->addr < ms->addr_64 || data->addr >= ms->addr_64 + len)
> + return 0;
> + /* Found, save it and stop walking. */
> + data->start = ms->addr_64;
> + data->end = ms->addr_64 + len;
> + data->msl = msl;
> + return 1;
> +}
> +
> +/**
> + * Create a new global Memory Region (MR) for a missing virtual address.
> + * This API should be called on a secondary process, then a request is
> + * sent to the primary process in order to create a MR for the address.
> + * As the global MR list is on the shared memory, the following LKey
> + * lookup should succeed unless the request fails.
> + *
> + * @param pd
> + * Pointer to ibv_pd of a device (net, regex, vdpa,...).
> + * @param mp_id
> + * Multi-process identifier of the device port.
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param[out] entry
> + * Pointer to returning MR cache entry, found in the global cache or newly
> + * created. If failed to create one, this will not be updated.
> + * @param addr
> + * Target virtual address to register.
> + * @param mr_ext_memseg_en
> + * Configurable flag about external memory segment enable or not.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on failure and rte_errno is set.
> + */
> +static uint32_t
> +mlx5_mr_create_secondary(struct ibv_pd *pd __rte_unused,
> + struct mlx5_mp_id *mp_id,
> + struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr,
> + unsigned int mr_ext_memseg_en __rte_unused)
> +{
> + int ret;
> +
> + DEBUG("port %u requesting MR creation for address (%p)",
> + mp_id->port_id, (void *)addr);
> + ret = mlx5_mp_req_mr_create(mp_id, addr);
> + if (ret) {
> + DEBUG("Fail to request MR creation for address (%p)",
> + (void *)addr);
> + return UINT32_MAX;
> + }
> + rte_rwlock_read_lock(&share_cache->rwlock);
> + /* Fill in output data. */
> + mlx5_mr_lookup_cache(share_cache, entry, addr);
> + /* Lookup can't fail. */
> + MLX5_ASSERT(entry->lkey != UINT32_MAX);
> + rte_rwlock_read_unlock(&share_cache->rwlock);
> + DEBUG("MR CREATED by primary process for %p:\n"
> + " [0x%" PRIxPTR ", 0x%" PRIxPTR "), lkey=0x%x",
> + (void *)addr, entry->start, entry->end, entry->lkey);
> + return entry->lkey;
> +}
> +
> +/**
> + * Create a new global Memory Region (MR) for a missing virtual address.
> + * Register entire virtually contiguous memory chunk around the address.
> + *
> + * @param pd
> + * Pointer to ibv_pd of a device (net, regex, vdpa,...).
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param[out] entry
> + * Pointer to returning MR cache entry, found in the global cache or newly
> + * created. If failed to create one, this will not be updated.
> + * @param addr
> + * Target virtual address to register.
> + * @param mr_ext_memseg_en
> + * Configurable flag about external memory segment enable or not.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on failure and rte_errno is set.
> + */
> +uint32_t
> +mlx5_mr_create_primary(struct ibv_pd *pd,
> + struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr,
> + unsigned int mr_ext_memseg_en)
> +{
> + struct mr_find_contig_memsegs_data data = {.addr = addr, };
> + struct mr_find_contig_memsegs_data data_re;
> + const struct rte_memseg_list *msl;
> + const struct rte_memseg *ms;
> + struct mlx5_mr *mr = NULL;
> + int ms_idx_shift = -1;
> + uint32_t bmp_size;
> + void *bmp_mem;
> + uint32_t ms_n;
> + uint32_t n;
> + size_t len;
> +
> + DRV_LOG(DEBUG, "Creating a MR using address (%p)", (void *)addr);
> + /*
> + * Release detached MRs if any. This can't be called while holding
> + * either memory_hotplug_lock or share_cache->rwlock. MRs on the free
> + * list have been detached by the memory free event but couldn't be
> + * released inside the callback due to deadlock. As a result, releasing
> + * resources is quite opportunistic.
> + */
> + mlx5_mr_garbage_collect(share_cache);
> + /*
> + * If enabled, find out a contiguous virtual address chunk in use, to
> + * which the given address belongs, in order to register the maximum
> + * range. In the best case where mempools are not dynamically recreated
> + * and '--socket-mem' is specified as an EAL option, it is very likely
> + * to have only one MR (LKey) per socket and per hugepage size even
> + * though the system memory is highly fragmented. As the whole memory
> + * chunk will be pinned by the kernel, it can't be reused unless the
> + * entire chunk is freed from EAL.
> + *
> + * If disabled, just register one memseg (page). Then, memory
> + * consumption will be minimized, but it may drop performance if there
> + * are many MRs to look up on the datapath.
> + */
> + if (!mr_ext_memseg_en) {
> + data.msl = rte_mem_virt2memseg_list((void *)addr);
> + data.start = RTE_ALIGN_FLOOR(addr, data.msl->page_sz);
> + data.end = data.start + data.msl->page_sz;
> + } else if (!rte_memseg_contig_walk(mr_find_contig_memsegs_cb,
> + &data)) {
> + DRV_LOG(WARNING,
> + "Unable to find virtually contiguous"
> + " chunk for address (%p)."
> + " rte_memseg_contig_walk() failed.", (void *)addr);
> + rte_errno = ENXIO;
> + goto err_nolock;
> + }
> +alloc_resources:
> + /* Addresses must be page-aligned. */
> + MLX5_ASSERT(data.msl);
> + MLX5_ASSERT(rte_is_aligned((void *)data.start, data.msl->page_sz));
> + MLX5_ASSERT(rte_is_aligned((void *)data.end, data.msl->page_sz));
> + msl = data.msl;
> + ms = rte_mem_virt2memseg((void *)data.start, msl);
> + len = data.end - data.start;
> + MLX5_ASSERT(ms);
> + MLX5_ASSERT(msl->page_sz == ms->hugepage_sz);
> + /* Number of memsegs in the range. */
> + ms_n = len / msl->page_sz;
> + DEBUG("Extending %p to [0x%" PRIxPTR ", 0x%" PRIxPTR "),"
> + " page_sz=0x%" PRIx64 ", ms_n=%u",
> + (void *)addr, data.start, data.end, msl->page_sz, ms_n);
> + /* Size of memory for bitmap. */
> + bmp_size = rte_bitmap_get_memory_footprint(ms_n);
> + mr = rte_zmalloc_socket(NULL,
> + RTE_ALIGN_CEIL(sizeof(*mr),
> + RTE_CACHE_LINE_SIZE) +
> + bmp_size,
> + RTE_CACHE_LINE_SIZE, msl->socket_id);
> + if (mr == NULL) {
> + DEBUG("Unable to allocate memory for a new MR of"
> + " address (%p).", (void *)addr);
> + rte_errno = ENOMEM;
> + goto err_nolock;
> + }
> + mr->msl = msl;
> + /*
> + * Save the index of the first memseg and initialize memseg bitmap. To
> + * see if a memseg of ms_idx in the memseg-list is still valid, check:
> + * rte_bitmap_get(mr->bmp, ms_idx - mr->ms_base_idx)
> + */
> + mr->ms_base_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms);
> + bmp_mem = RTE_PTR_ALIGN_CEIL(mr + 1, RTE_CACHE_LINE_SIZE);
> + mr->ms_bmp = rte_bitmap_init(ms_n, bmp_mem, bmp_size);
> + if (mr->ms_bmp == NULL) {
> + DEBUG("Unable to initialize bitmap for a new MR of"
> + " address (%p).", (void *)addr);
> + rte_errno = EINVAL;
> + goto err_nolock;
> + }
> + /*
> + * Should recheck whether the extended contiguous chunk is still valid.
> + * Because memory_hotplug_lock can't be held if there are any memory-
> + * related calls in a critical path, the resource allocation above can't
> + * be locked. If the memory has been changed at this point, try again
> + * with just a single page. If not, go on with the big chunk atomically
> + * from here.
> + */
> + rte_mcfg_mem_read_lock();
> + data_re = data;
> + if (len > msl->page_sz &&
> + !rte_memseg_contig_walk(mr_find_contig_memsegs_cb, &data_re)) {
> + DEBUG("Unable to find virtually contiguous"
> + " chunk for address (%p)."
> + " rte_memseg_contig_walk() failed.", (void *)addr);
> + rte_errno = ENXIO;
> + goto err_memlock;
> + }
> + if (data.start != data_re.start || data.end != data_re.end) {
> + /*
> + * The extended contiguous chunk has been changed. Try again
> + * with single memseg instead.
> + */
> + data.start = RTE_ALIGN_FLOOR(addr, msl->page_sz);
> + data.end = data.start + msl->page_sz;
> + rte_mcfg_mem_read_unlock();
> + mr_free(mr);
> + goto alloc_resources;
> + }
> + MLX5_ASSERT(data.msl == data_re.msl);
> + rte_rwlock_write_lock(&share_cache->rwlock);
> + /*
> + * Check the address is really missing. If another thread already
> + * created one or it is not found due to overflow, abort and return.
> + */
> + if (mlx5_mr_lookup_cache(share_cache, entry, addr) != UINT32_MAX) {
> + /*
> + * Insert to the global cache table. It may fail due to
> + * low-on-memory. Then, this entry will have to be searched
> + * here again.
> + */
> + mr_btree_insert(&share_cache->cache, entry);
> + DEBUG("Found MR for %p on final lookup, abort", (void
> *)addr);
> + rte_rwlock_write_unlock(&share_cache->rwlock);
> + rte_mcfg_mem_read_unlock();
> + /*
> + * Must be unlocked before calling rte_free() because
> + * mlx5_mr_mem_event_free_cb() can be called inside.
> + */
> + mr_free(mr);
> + return entry->lkey;
> + }
> + /*
> + * Trim start and end addresses for verbs MR. Set bits for registering
> + * memsegs but exclude already registered ones. Bitmap can be
> + * fragmented.
> + */
> + for (n = 0; n < ms_n; ++n) {
> + uintptr_t start;
> + struct mr_cache_entry ret;
> +
> + memset(&ret, 0, sizeof(ret));
> + start = data_re.start + n * msl->page_sz;
> + /* Exclude memsegs already registered by other MRs. */
> + if (mlx5_mr_lookup_cache(share_cache, &ret, start) ==
> + UINT32_MAX) {
> + /*
> + * Start from the first unregistered memseg in the
> + * extended range.
> + */
> + if (ms_idx_shift == -1) {
> + mr->ms_base_idx += n;
> + data.start = start;
> + ms_idx_shift = n;
> + }
> + data.end = start + msl->page_sz;
> + rte_bitmap_set(mr->ms_bmp, n - ms_idx_shift);
> + ++mr->ms_n;
> + }
> + }
> + len = data.end - data.start;
> + mr->ms_bmp_n = len / msl->page_sz;
> + MLX5_ASSERT(ms_idx_shift + mr->ms_bmp_n <= ms_n);
> + /*
> + * Finally create a verbs MR for the memory chunk. ibv_reg_mr() can be
> + * called while holding the memory lock because it doesn't use
> + * mlx5_alloc_buf_extern() which eventually calls rte_malloc_socket()
> + * through mlx5_alloc_verbs_buf().
> + */
> + mr->ibv_mr = mlx5_glue->reg_mr(pd, (void *)data.start, len,
> + IBV_ACCESS_LOCAL_WRITE |
> + IBV_ACCESS_RELAXED_ORDERING);
> + if (mr->ibv_mr == NULL) {
> + DEBUG("Fail to create a verbs MR for address (%p)",
> + (void *)addr);
> + rte_errno = EINVAL;
> + goto err_mrlock;
> + }
> + MLX5_ASSERT((uintptr_t)mr->ibv_mr->addr == data.start);
> + MLX5_ASSERT(mr->ibv_mr->length == len);
> + LIST_INSERT_HEAD(&share_cache->mr_list, mr, mr);
> + DEBUG("MR CREATED (%p) for %p:\n"
> + " [0x%" PRIxPTR ", 0x%" PRIxPTR "),"
> + " lkey=0x%x base_idx=%u ms_n=%u, ms_bmp_n=%u",
> + (void *)mr, (void *)addr, data.start, data.end,
> + rte_cpu_to_be_32(mr->ibv_mr->lkey),
> + mr->ms_base_idx, mr->ms_n, mr->ms_bmp_n);
> + /* Insert to the global cache table. */
> + mlx5_mr_insert_cache(share_cache, mr);
> + /* Fill in output data. */
> + mlx5_mr_lookup_cache(share_cache, entry, addr);
> + /* Lookup can't fail. */
> + MLX5_ASSERT(entry->lkey != UINT32_MAX);
> + rte_rwlock_write_unlock(&share_cache->rwlock);
> + rte_mcfg_mem_read_unlock();
> + return entry->lkey;
> +err_mrlock:
> + rte_rwlock_write_unlock(&share_cache->rwlock);
> +err_memlock:
> + rte_mcfg_mem_read_unlock();
> +err_nolock:
> + /*
> + * In case of error, as this can be called in a datapath, a warning
> + * message per error is preferable instead. Must be unlocked before
> + * calling rte_free() because mlx5_mr_mem_event_free_cb() can be called
> + * inside.
> + */
> + mr_free(mr);
> + return UINT32_MAX;
> +}
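
The control flow above is dense, so summarizing the locking idea:
nothing may be allocated under memory_hotplug_lock, hence the function
snapshots the contiguous range, allocates, retakes the lock, re-walks
the memsegs, and falls back to a single page if the range moved. A
generic standalone sketch of that optimistic-retry shape (a generation
counter stands in for the memseg re-walk; names are illustrative):

  #include <pthread.h>
  #include <stdlib.h>

  static pthread_rwlock_t mem_lock = PTHREAD_RWLOCK_INITIALIZER;
  static int mem_gen; /* bumped by "hotplug" under the write lock */

  static void *
  optimistic_register(size_t len)
  {
      for (;;) {
          int gen;
          void *res;

          pthread_rwlock_rdlock(&mem_lock);
          gen = mem_gen; /* snapshot the memory layout */
          pthread_rwlock_unlock(&mem_lock);
          res = malloc(len); /* no lock held while allocating */
          if (res == NULL)
              return NULL;
          pthread_rwlock_rdlock(&mem_lock);
          if (gen == mem_gen) {
              /* Layout unchanged: commit while still locked. */
              pthread_rwlock_unlock(&mem_lock);
              return res;
          }
          /* Layout changed under us: drop the work and retry. */
          pthread_rwlock_unlock(&mem_lock);
          free(res);
      }
  }
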
> +
> +/**
> + * Create a new global Memory Region (MR) for a missing virtual address.
> + * This can be called from primary and secondary process.
> + *
> + * @param pd
> + * Pointer to ibv_pd of a device (net, regex, vdpa,...).
> + * @param mp_id
> + * Multi-process identifier of the device port.
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param[out] entry
> + * Pointer to returning MR cache entry, found in the global cache or newly
> + * created. If failed to create one, this will not be updated.
> + * @param addr
> + * Target virtual address to register.
> + * @param mr_ext_memseg_en
> + * Configurable flag about external memory segment enable or not.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on failure and rte_errno is set.
> + */
> +static uint32_t
> +mlx5_mr_create(struct ibv_pd *pd, struct mlx5_mp_id *mp_id,
> + struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr,
> + unsigned int mr_ext_memseg_en)
> +{
> + uint32_t ret = 0;
> +
> + switch (rte_eal_process_type()) {
> + case RTE_PROC_PRIMARY:
> + ret = mlx5_mr_create_primary(pd, share_cache, entry,
> + addr, mr_ext_memseg_en);
> + break;
> + case RTE_PROC_SECONDARY:
> + ret = mlx5_mr_create_secondary(pd, mp_id, share_cache, entry,
> + addr, mr_ext_memseg_en);
> + break;
> + default:
> + break;
> + }
> + return ret;
> +}
> +
> +/**
> + * Look up address in the global MR cache table. If not found, create a new
> MR.
> + * Insert the found/created entry to local bottom-half cache table.
> + *
> + * @param pd
> + * Pointer to ibv_pd of a device (net, regex, vdpa,...).
> + * @param mp_id
> + * Multi-process identifier of the device port.
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param mr_ctrl
> + * Pointer to per-queue MR control structure.
> + * @param[out] entry
> + * Pointer to returning MR cache entry, found in the global cache or newly
> + * created. If failed to create one, this is not written.
> + * @param addr
> + * Search key.
> + * @param mr_ext_memseg_en
> + * Configurable flag about external memory segment enable or not.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on no match.
> + */
> +static uint32_t
> +mr_lookup_caches(struct ibv_pd *pd, struct mlx5_mp_id *mp_id,
> + struct mlx5_mr_share_cache *share_cache,
> + struct mlx5_mr_ctrl *mr_ctrl,
> + struct mr_cache_entry *entry, uintptr_t addr,
> + unsigned int mr_ext_memseg_en)
> +{
> + struct mlx5_mr_btree *bt = &mr_ctrl->cache_bh;
> + uint32_t lkey;
> + uint16_t idx;
> +
> + /* If local cache table is full, try to double it. */
> + if (unlikely(bt->len == bt->size))
> + mr_btree_expand(bt, bt->size << 1);
> + /* Look up in the global cache. */
> + rte_rwlock_read_lock(&share_cache->rwlock);
> + lkey = mr_btree_lookup(&share_cache->cache, &idx, addr);
> + if (lkey != UINT32_MAX) {
> + /* Found. */
> + *entry = (*share_cache->cache.table)[idx];
> + rte_rwlock_read_unlock(&share_cache->rwlock);
> + /*
> + * Update local cache. Even if it fails, return the found entry
> + * to update top-half cache. Next time, this entry will be found
> + * in the global cache.
> + */
> + mr_btree_insert(bt, entry);
> + return lkey;
> + }
> + rte_rwlock_read_unlock(&share_cache->rwlock);
> + /* First time to see the address? Create a new MR. */
> + lkey = mlx5_mr_create(pd, mp_id, share_cache, entry, addr,
> + mr_ext_memseg_en);
> + /*
> + * Update the local cache if a new global MR was successfully created.
> + * Even if it failed to create one, there's no action to take in this
> + * datapath code. As the returned LKey is invalid, this will eventually
> + * make the HW fail.
> + */
> + if (lkey != UINT32_MAX)
> + mr_btree_insert(bt, entry);
> + return lkey;
> +}
> +
> +/**
> + * Bottom-half of LKey search on datapath. First search in cache_bh[]
> + * and if it misses, search in the global MR cache table and update the
> + * new entry to per-queue local caches.
> + *
> + * @param pd
> + * Pointer to ibv_pd of a device (net, regex, vdpa,...).
> + * @param mp_id
> + * Multi-process identifier of the device port.
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + * @param mr_ctrl
> + * Pointer to per-queue MR control structure.
> + * @param addr
> + * Search key.
> + * @param mr_ext_memseg_en
> + * Configurable flag about external memory segment enable or not.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on no match.
> + */
> +uint32_t
> +mlx5_mr_addr2mr_bh(struct ibv_pd *pd, struct mlx5_mp_id *mp_id,
> + struct mlx5_mr_share_cache *share_cache,
> + struct mlx5_mr_ctrl *mr_ctrl,
> + uintptr_t addr, unsigned int mr_ext_memseg_en)
> +{
> + uint32_t lkey;
> + uint16_t bh_idx = 0;
> + /* Victim in top-half cache to replace with new entry. */
> + struct mr_cache_entry *repl = &mr_ctrl->cache[mr_ctrl->head];
> +
> + /* Binary-search MR translation table. */
> + lkey = mr_btree_lookup(&mr_ctrl->cache_bh, &bh_idx, addr);
> + /* Update top-half cache. */
> + if (likely(lkey != UINT32_MAX)) {
> + *repl = (*mr_ctrl->cache_bh.table)[bh_idx];
> + } else {
> + /*
> + * If missed in local lookup table, search in the global cache
> + * and local cache_bh[] will be updated inside if possible.
> + * Top-half cache entry will also be updated.
> + */
> + lkey = mr_lookup_caches(pd, mp_id, share_cache, mr_ctrl,
> + repl, addr, mr_ext_memseg_en);
> + if (unlikely(lkey == UINT32_MAX))
> + return UINT32_MAX;
> + }
> + /* Update the most recently used entry. */
> + mr_ctrl->mru = mr_ctrl->head;
> + /* Point to the next victim, the oldest. */
> + mr_ctrl->head = (mr_ctrl->head + 1) % MLX5_MR_CACHE_N;
> + return lkey;
> +}
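
Worth a short note on the top-half policy: mru marks the last hit and
head walks round-robin over the MLX5_MR_CACHE_N slots, so the oldest
entry is always the next victim and no per-entry bookkeeping is needed.
Standalone sketch of the replacement step (simplified types):

  #include <stdint.h>

  #define CACHE_N 8 /* mirrors MLX5_MR_CACHE_N */

  struct ent { uintptr_t start, end; uint32_t lkey; };

  struct ctrl {
      uint16_t mru;  /* index of the last hit */
      uint16_t head; /* next victim, round-robin */
      struct ent cache[CACHE_N];
  };

  /* Install a freshly resolved entry and advance the victim pointer. */
  static void
  promote(struct ctrl *c, const struct ent *e)
  {
      c->cache[c->head] = *e;
      c->mru = c->head;
      c->head = (c->head + 1) % CACHE_N;
  }
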
> +
> +/**
> + * Release all the created MRs and resources on global MR cache of a
> + * device.
> + *
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + */
> +void
> +mlx5_mr_release_cache(struct mlx5_mr_share_cache *share_cache)
> +{
> + struct mlx5_mr *mr_next;
> +
> + rte_rwlock_write_lock(&share_cache->rwlock);
> + /* Detach from MR list and move to free list. */
> + mr_next = LIST_FIRST(&share_cache->mr_list);
> + while (mr_next != NULL) {
> + struct mlx5_mr *mr = mr_next;
> +
> + mr_next = LIST_NEXT(mr, mr);
> + LIST_REMOVE(mr, mr);
> + LIST_INSERT_HEAD(&share_cache->mr_free_list, mr, mr);
> + }
> + LIST_INIT(&share_cache->mr_list);
> + /* Free global cache. */
> + mlx5_mr_btree_free(&share_cache->cache);
> + rte_rwlock_write_unlock(&share_cache->rwlock);
> + /* Free all remaining MRs. */
> + mlx5_mr_garbage_collect(share_cache);
> +}
> +
> +/**
> + * Flush all of the local cache entries.
> + *
> + * @param mr_ctrl
> + * Pointer to per-queue MR local cache.
> + */
> +void
> +mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl)
> +{
> + /* Reset the most-recently-used index. */
> + mr_ctrl->mru = 0;
> + /* Reset the linear search array. */
> + mr_ctrl->head = 0;
> + memset(mr_ctrl->cache, 0, sizeof(mr_ctrl->cache));
> + /* Reset the B-tree table. */
> + mr_ctrl->cache_bh.len = 1;
> + mr_ctrl->cache_bh.overflow = 0;
> + /* Update the generation number. */
> + mr_ctrl->cur_gen = *mr_ctrl->dev_gen_ptr;
> + DRV_LOG(DEBUG, "mr_ctrl(%p): flushed, cur_gen=%d",
> + (void *)mr_ctrl, mr_ctrl->cur_gen);
> +}
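
For context on who calls this: dev_gen_ptr/cur_gen is the invalidation
channel between the control path and the queues. A sketch of the
control-path side, modeled on the net PMD's memory event callback (an
assumed flow, not part of this patch):

  /* After freeing MRs: rebuild the global cache and bump dev_gen so
   * every queue sees the mismatch and calls
   * mlx5_mr_flush_local_cache() on its next lookup. */
  static void
  invalidate_local_caches(struct mlx5_mr_share_cache *share_cache)
  {
      rte_rwlock_write_lock(&share_cache->rwlock);
      mlx5_mr_rebuild_cache(share_cache);
      ++share_cache->dev_gen;
      rte_smp_wmb(); /* publish the new generation before unlocking */
      rte_rwlock_write_unlock(&share_cache->rwlock);
  }
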
> +
> +/**
> + * Creates a memory region for external memory, that is memory which is
> + * not part of the DPDK memory segments.
> + *
> + * @param pd
> + * Pointer to ibv_pd of a device (net, regex, vdpa,...).
> + * @param addr
> + * Starting virtual address of memory.
> + * @param len
> + * Length of memory segment being mapped.
> + * @param socket_id
> + * Socket to allocate heap memory for the control structures.
> + *
> + * @return
> + * Pointer to MR structure on success, NULL otherwise.
> + */
> +struct mlx5_mr *
> +mlx5_create_mr_ext(struct ibv_pd *pd, uintptr_t addr, size_t len,
> + int socket_id)
> +{
> + struct mlx5_mr *mr = NULL;
> +
> + mr = rte_zmalloc_socket(NULL,
> + RTE_ALIGN_CEIL(sizeof(*mr),
> + RTE_CACHE_LINE_SIZE),
> + RTE_CACHE_LINE_SIZE, socket_id);
> + if (mr == NULL)
> + return NULL;
> + mr->ibv_mr = mlx5_glue->reg_mr(pd, (void *)addr, len,
> + IBV_ACCESS_LOCAL_WRITE |
> + IBV_ACCESS_RELAXED_ORDERING);
> + if (mr->ibv_mr == NULL) {
> + DRV_LOG(WARNING,
> + "Fail to create a verbs MR for address (%p)",
> + (void *)addr);
> + rte_free(mr);
> + return NULL;
> + }
> + mr->msl = NULL; /* Mark as external memory. */
> + mr->ms_bmp = NULL;
> + mr->ms_n = 1;
> + mr->ms_bmp_n = 1;
> + DRV_LOG(DEBUG,
> + "MR CREATED (%p) for external memory %p:\n"
> + " [0x%" PRIxPTR ", 0x%" PRIxPTR "),"
> + " lkey=0x%x base_idx=%u ms_n=%u, ms_bmp_n=%u",
> + (void *)mr, (void *)addr,
> + addr, addr + len, rte_cpu_to_be_32(mr->ibv_mr->lkey),
> + mr->ms_base_idx, mr->ms_n, mr->ms_bmp_n);
> + return mr;
> +}
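
One usability note on this export: mlx5_create_mr_ext() only registers
the memory, it does not publish the MR anywhere. Roughly how a PMD
would wire it into the shared cache (a sketch under that assumption,
error handling elided):

  static int
  register_ext_mem(struct ibv_pd *pd,
                   struct mlx5_mr_share_cache *share_cache,
                   uintptr_t addr, size_t len, int socket_id)
  {
      struct mlx5_mr *mr;

      mr = mlx5_create_mr_ext(pd, addr, len, socket_id);
      if (mr == NULL)
          return -1;
      rte_rwlock_write_lock(&share_cache->rwlock);
      LIST_INSERT_HEAD(&share_cache->mr_list, mr, mr);
      /* Insert to the global cache table. */
      mlx5_mr_insert_cache(share_cache, mr);
      rte_rwlock_write_unlock(&share_cache->rwlock);
      return 0;
  }
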
> +
> +/**
> + * Dump all the created MRs and the global cache entries.
> + *
> + * @param share_cache
> + * Pointer to a global shared MR cache.
> + */
> +void
> +mlx5_mr_dump_cache(struct mlx5_mr_share_cache *share_cache __rte_unused)
> +{
> +#ifdef RTE_LIBRTE_MLX5_DEBUG
> + struct mlx5_mr *mr;
> + int mr_n = 0;
> + int chunk_n = 0;
> +
> + rte_rwlock_read_lock(&share_cache->rwlock);
> + /* Iterate all the existing MRs. */
> + LIST_FOREACH(mr, &share_cache->mr_list, mr) {
> + unsigned int n;
> +
> + DEBUG("MR[%u], LKey = 0x%x, ms_n = %u, ms_bmp_n = %u",
> + mr_n++, rte_cpu_to_be_32(mr->ibv_mr->lkey),
> + mr->ms_n, mr->ms_bmp_n);
> + if (mr->ms_n == 0)
> + continue;
> + for (n = 0; n < mr->ms_bmp_n; ) {
> + struct mr_cache_entry ret = { 0, };
> +
> + n = mr_find_next_chunk(mr, &ret, n);
> + if (!ret.end)
> + break;
> + DEBUG(" chunk[%u], [0x%" PRIxPTR ", 0x%" PRIxPTR
> ")",
> + chunk_n++, ret.start, ret.end);
> + }
> + }
> + DEBUG("Dumping global cache %p", (void *)share_cache);
> + mlx5_mr_btree_dump(&share_cache->cache);
> + rte_rwlock_read_unlock(&share_cache->rwlock);
> +#endif
> +}
> diff --git a/drivers/common/mlx5/mlx5_common_mr.h
> b/drivers/common/mlx5/mlx5_common_mr.h
> new file mode 100644
> index 0000000000..e805f96375
> --- /dev/null
> +++ b/drivers/common/mlx5/mlx5_common_mr.h
> @@ -0,0 +1,160 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2018 6WIND S.A.
> + * Copyright 2018 Mellanox Technologies, Ltd
> + */
> +
> +#ifndef RTE_PMD_MLX5_COMMON_MR_H_
> +#define RTE_PMD_MLX5_COMMON_MR_H_
> +
> +#include <stddef.h>
> +#include <stdint.h>
> +#include <sys/queue.h>
> +
> +/* Verbs header. */
> +/* ISO C doesn't support unnamed structs/unions, disabling -pedantic. */
> +#ifdef PEDANTIC
> +#pragma GCC diagnostic ignored "-Wpedantic"
> +#endif
> +#include <infiniband/verbs.h>
> +#include <infiniband/mlx5dv.h>
> +#ifdef PEDANTIC
> +#pragma GCC diagnostic error "-Wpedantic"
> +#endif
> +
> +#include <rte_rwlock.h>
> +#include <rte_bitmap.h>
> +#include <rte_memory.h>
> +
> +#include "mlx5_common_mp.h"
> +
> +/* Size of per-queue MR cache array for linear search. */
> +#define MLX5_MR_CACHE_N 8
> +#define MLX5_MR_BTREE_CACHE_N 256
> +
> +/* Memory Region object. */
> +struct mlx5_mr {
> + LIST_ENTRY(mlx5_mr) mr; /**< Pointer to the prev/next entry. */
> + struct ibv_mr *ibv_mr; /* Verbs Memory Region. */
> + const struct rte_memseg_list *msl;
> + int ms_base_idx; /* Start index of msl->memseg_arr[]. */
> + int ms_n; /* Number of memsegs in use. */
> + uint32_t ms_bmp_n; /* Number of bits in memsegs bit-mask. */
> + struct rte_bitmap *ms_bmp; /* Bit-mask of memsegs belonging to MR. */
> +};
> +
> +/* Cache entry for Memory Region. */
> +struct mr_cache_entry {
> + uintptr_t start; /* Start address of MR. */
> + uintptr_t end; /* End address of MR. */
> + uint32_t lkey; /* rte_cpu_to_be_32(ibv_mr->lkey). */
> +} __rte_packed;
> +
> +/* MR cache table for binary search. */
> +struct mlx5_mr_btree {
> + uint16_t len; /* Number of entries. */
> + uint16_t size; /* Total number of entries. */
> + int overflow; /* Mark failure of table expansion. */
> + struct mr_cache_entry (*table)[];
> +} __rte_packed;
> +
> +/* Per-queue MR control descriptor. */
> +struct mlx5_mr_ctrl {
> + uint32_t *dev_gen_ptr; /* Generation number of device to poll. */
> + uint32_t cur_gen; /* Generation number saved to flush caches. */
> + uint16_t mru; /* Index of last hit entry in top-half cache. */
> + uint16_t head; /* Index of the oldest entry in top-half cache. */
> + struct mr_cache_entry cache[MLX5_MR_CACHE_N]; /* Cache for top-half. */
> + struct mlx5_mr_btree cache_bh; /* Cache for bottom-half. */
> +} __rte_packed;
> +
> +LIST_HEAD(mlx5_mr_list, mlx5_mr);
> +
> +/* Global per-device MR cache. */
> +struct mlx5_mr_share_cache {
> + uint32_t dev_gen; /* Generation number to flush local caches. */
> + rte_rwlock_t rwlock; /* MR cache Lock. */
> + struct mlx5_mr_btree cache; /* Global MR cache table. */
> + struct mlx5_mr_list mr_list; /* Registered MR list. */
> + struct mlx5_mr_list mr_free_list; /* Freed MR list. */
> +} __rte_packed;
> +
> +/**
> + * Look up LKey from given lookup table by linear search. Firstly look
> + * up the last-hit entry. If miss, the entire array is searched. If
> + * found, update the last-hit index and return LKey.
> + *
> + * @param lkp_tbl
> + * Pointer to lookup table.
> + * @param[in,out] cached_idx
> + * Pointer to last-hit index.
> + * @param n
> + * Size of lookup table.
> + * @param addr
> + * Search key.
> + *
> + * @return
> + * Searched LKey on success, UINT32_MAX on no match.
> + */
> +static __rte_always_inline uint32_t
> +mlx5_mr_lookup_lkey(struct mr_cache_entry *lkp_tbl, uint16_t *cached_idx,
> + uint16_t n, uintptr_t addr)
> +{
> + uint16_t idx;
> +
> + if (likely(addr >= lkp_tbl[*cached_idx].start &&
> + addr < lkp_tbl[*cached_idx].end))
> + return lkp_tbl[*cached_idx].lkey;
> + for (idx = 0; idx < n && lkp_tbl[idx].start != 0; ++idx) {
> + if (addr >= lkp_tbl[idx].start &&
> + addr < lkp_tbl[idx].end) {
> + /* Found. */
> + *cached_idx = idx;
> + return lkp_tbl[idx].lkey;
> + }
> + }
> + return UINT32_MAX;
> +}
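
This inline is the real per-packet fast path; the intended layering
puts it in front of mlx5_mr_addr2mr_bh() so the common case costs one
MRU compare. A sketch of such a caller (hypothetical wrapper name; the
mlx5_mr_* calls are the ones declared in this header):

  static __rte_always_inline uint32_t
  addr2mr(struct ibv_pd *pd, struct mlx5_mp_id *mp_id,
          struct mlx5_mr_share_cache *share_cache,
          struct mlx5_mr_ctrl *mr_ctrl, uintptr_t addr,
          unsigned int mr_ext_memseg_en)
  {
      uint32_t lkey;

      /* Linear top-half cache first: last hit, then all entries. */
      lkey = mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru,
                                 MLX5_MR_CACHE_N, addr);
      if (likely(lkey != UINT32_MAX))
          return lkey;
      /* Miss: take the bottom-half (B-tree, then global cache). */
      return mlx5_mr_addr2mr_bh(pd, mp_id, share_cache, mr_ctrl,
                                addr, mr_ext_memseg_en);
  }
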
> +
> +__rte_experimental
> +int mlx5_mr_btree_init(struct mlx5_mr_btree *bt, int n, int socket);
> +__rte_experimental
> +void mlx5_mr_btree_free(struct mlx5_mr_btree *bt);
> +__rte_experimental
> +void mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused);
> +__rte_experimental
> +uint32_t mlx5_mr_addr2mr_bh(struct ibv_pd *pd, struct mlx5_mp_id *mp_id,
> + struct mlx5_mr_share_cache *share_cache,
> + struct mlx5_mr_ctrl *mr_ctrl,
> + uintptr_t addr, unsigned int mr_ext_memseg_en);
> +__rte_experimental
> +void mlx5_mr_release_cache(struct mlx5_mr_share_cache *mr_cache);
> +__rte_experimental
> +void mlx5_mr_dump_cache(struct mlx5_mr_share_cache *share_cache __rte_unused);
> +__rte_experimental
> +void mlx5_mr_rebuild_cache(struct mlx5_mr_share_cache *share_cache);
> +__rte_experimental
> +void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl);
> +__rte_experimental
> +int mlx5_mr_insert_cache(struct mlx5_mr_share_cache *share_cache,
> + struct mlx5_mr *mr);
> +__rte_experimental
> +uint32_t
> +mlx5_mr_lookup_cache(struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr);
> +__rte_experimental
> +struct mlx5_mr *
> +mlx5_mr_lookup_list(struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr);
> +__rte_experimental
> +struct mlx5_mr *
> +mlx5_create_mr_ext(struct ibv_pd *pd, uintptr_t addr, size_t len,
> + int socket_id);
> +__rte_experimental
> +uint32_t
> +mlx5_mr_create_primary(struct ibv_pd *pd,
> + struct mlx5_mr_share_cache *share_cache,
> + struct mr_cache_entry *entry, uintptr_t addr,
> + unsigned int mr_ext_memseg_en);
> +
> +#endif /* RTE_PMD_MLX5_COMMON_MR_H_ */
> diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map
> b/drivers/common/mlx5/rte_common_mlx5_version.map
> index 265703d1c9..b58a378278 100644
> --- a/drivers/common/mlx5/rte_common_mlx5_version.map
> +++ b/drivers/common/mlx5/rte_common_mlx5_version.map
> @@ -61,4 +61,18 @@ EXPERIMENTAL {
> mlx5_mp_req_mr_create;
> mlx5_mp_req_queue_state_modify;
> mlx5_mp_req_verbs_cmd_fd;
> +
> + mlx5_mr_btree_init;
> + mlx5_mr_btree_free;
> + mlx5_mr_btree_dump;
> + mlx5_mr_addr2mr_bh;
> + mlx5_mr_release_cache;
> + mlx5_mr_dump_cache;
> + mlx5_mr_rebuild_cache;
> + mlx5_mr_insert_cache;
> + mlx5_mr_lookup_cache;
> + mlx5_mr_lookup_list;
> + mlx5_create_mr_ext;
> + mlx5_mr_create_primary;
> + mlx5_mr_flush_local_cache;
> };
> --
> 2.16.6