From: Dharmik Thakkar
To: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Pablo de Lara,
	John McNamara, Marko Kovacevic
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, Dharmik Thakkar
Date: Mon, 1 Apr 2019 23:08:29 +0000
Message-Id: <20190401230830.17931-2-dharmik.thakkar@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190401230830.17931-1-dharmik.thakkar@arm.com>
References: <20190401221836.16599-1-dharmik.thakkar@arm.com>
 <20190401230830.17931-1-dharmik.thakkar@arm.com>
Subject: [dpdk-dev] [PATCH v4 1/2] hash: add lock free support for extendable bucket

This patch enables lock-free read-write concurrency support for the
extendable bucket feature.

Suggested-by: Honnappa Nagarahalli
Signed-off-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Reviewed-by: Gavin Hu
Reviewed-by: Honnappa Nagarahalli
Acked-by: Yipeng Wang
---
v3:
- Update lib/librte_hash/rte_cuckoo_hash.c (Yipeng)
- Add Acked-by tag.
---
v2:
- Update doc (Yipeng)
- Update lib/librte_hash/rte_cuckoo_hash.c (Yipeng)
---
 doc/guides/prog_guide/hash_lib.rst |   6 +-
 lib/librte_hash/rte_cuckoo_hash.c  | 157 ++++++++++++++++++++---------
 lib/librte_hash/rte_cuckoo_hash.h  |   7 ++
 3 files changed, 117 insertions(+), 53 deletions(-)

diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
index 85a6edfa8b16..d06c7de2ead1 100644
--- a/doc/guides/prog_guide/hash_lib.rst
+++ b/doc/guides/prog_guide/hash_lib.rst
@@ -108,9 +108,9 @@ Extendable Bucket Functionality support
 An extra flag is used to enable this functionality (flag is not set by default). When the (RTE_HASH_EXTRA_FLAGS_EXT_TABLE) is set and
 in the very unlikely case due to excessive hash collisions that a key has failed to be inserted, the hash table bucket is extended with a
 linked list to insert these failed keys. This feature is important for the workloads (e.g. telco workloads) that need to insert up to 100% of the
-hash table size and can't tolerate any key insertion failure (even if very few). Currently the extendable bucket is not supported
-with the lock-free concurrency implementation (RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF).
-
+hash table size and can't tolerate any key insertion failure (even if very few).
+Please note that with the 'lock free read/write concurrency' flag enabled, users need to call 'rte_hash_free_key_with_position' API in order to free the empty buckets and
+deleted keys, to maintain the 100% capacity guarantee.
 
 Implementation Details (non Extendable Bucket Case)
 ---------------------------------------------------
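(Illustration, not part of the patch: a minimal sketch of the intended usage described in the doc change above. The table parameters, key type and helper names are made up for the example.)

#include <stdint.h>

#include <rte_hash.h>
#include <rte_jhash.h>

/* Illustrative parameters; any application-specific values work. */
static struct rte_hash *
create_lf_ext_table(void)
{
	struct rte_hash_parameters params = {
		.name = "lf_ext_table",
		.entries = 1024,
		.key_len = sizeof(uint32_t),
		.hash_func = rte_jhash,
		.hash_func_init_val = 0,
		.socket_id = 0,
		.extra_flag = RTE_HASH_EXTRA_FLAGS_EXT_TABLE |
			      RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
	};

	return rte_hash_create(&params);
}

static void
delete_and_reclaim(struct rte_hash *h, const uint32_t *key)
{
	/* With the lock-free flag, deletion does not return the key
	 * slot (or an emptied extendable bucket) to the free lists.
	 */
	int32_t pos = rte_hash_del_key(h, key);

	if (pos < 0)
		return;
	/* Once the application knows no reader still holds a
	 * reference to this key, free it; with this patch that also
	 * recycles any ext bucket recorded against this position.
	 */
	rte_hash_free_key_with_position(h, pos);
}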
diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index c01489ba5193..8213dbf5f782 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -140,6 +140,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	unsigned int readwrite_concur_support = 0;
 	unsigned int writer_takes_lock = 0;
 	unsigned int no_free_on_del = 0;
+	uint32_t *ext_bkt_to_free = NULL;
 	uint32_t *tbl_chng_cnt = NULL;
 	unsigned int readwrite_concur_lf_support = 0;
 
@@ -170,15 +171,6 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		return NULL;
 	}
 
-	if ((params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF) &&
-	    (params->extra_flag & RTE_HASH_EXTRA_FLAGS_EXT_TABLE)) {
-		rte_errno = EINVAL;
-		RTE_LOG(ERR, HASH, "rte_hash_create: extendable bucket "
-			"feature not supported with rw concurrency "
-			"lock free\n");
-		return NULL;
-	}
-
 	/* Check extra flags field to check extra options. */
 	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_TRANS_MEM_SUPPORT)
 		hw_trans_mem_support = 1;
@@ -302,6 +294,16 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		 */
 		for (i = 1; i <= num_buckets; i++)
 			rte_ring_sp_enqueue(r_ext, (void *)((uintptr_t) i));
+
+		if (readwrite_concur_lf_support) {
+			ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) *
+								num_key_slots, 0);
+			if (ext_bkt_to_free == NULL) {
+				RTE_LOG(ERR, HASH, "ext bkt to free memory allocation "
+								"failed\n");
+				goto err_unlock;
+			}
+		}
 	}
 
 	const uint32_t key_entry_size =
@@ -393,6 +395,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 					default_hash_func : params->hash_func;
 	h->key_store = k;
 	h->free_slots = r;
+	h->ext_bkt_to_free = ext_bkt_to_free;
 	h->tbl_chng_cnt = tbl_chng_cnt;
 	*h->tbl_chng_cnt = 0;
 	h->hw_trans_mem_support = hw_trans_mem_support;
@@ -443,6 +446,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	rte_free(buckets_ext);
 	rte_free(k);
 	rte_free(tbl_chng_cnt);
+	rte_free(ext_bkt_to_free);
 	return NULL;
 }
 
@@ -484,6 +488,7 @@ rte_hash_free(struct rte_hash *h)
 	rte_free(h->buckets);
 	rte_free(h->buckets_ext);
 	rte_free(h->tbl_chng_cnt);
+	rte_free(h->ext_bkt_to_free);
 	rte_free(h);
 	rte_free(te);
 }
@@ -799,7 +804,7 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h,
 			__atomic_store_n(h->tbl_chng_cnt,
 					 *h->tbl_chng_cnt + 1,
 					 __ATOMIC_RELEASE);
-			/* The stores to sig_alt and sig_current should not
+			/* The store to sig_current should not
 			 * move above the store to tbl_chng_cnt.
 			 */
 			__atomic_thread_fence(__ATOMIC_RELEASE);
@@ -831,7 +836,7 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h,
 			__atomic_store_n(h->tbl_chng_cnt,
 					 *h->tbl_chng_cnt + 1,
 					 __ATOMIC_RELEASE);
-			/* The stores to sig_alt and sig_current should not
+			/* The store to sig_current should not
 			 * move above the store to tbl_chng_cnt.
 			 */
 			__atomic_thread_fence(__ATOMIC_RELEASE);
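(Illustration, not part of the patch: the change-counter protocol the two hunks above rely on, sketched in isolation. 'cnt' stands in for h->tbl_chng_cnt; move_entry() and search() are stand-ins for the real cuckoo displacement and probe code.)

#include <stdint.h>

static uint32_t cnt;

static void move_entry(void) { /* displace an entry between buckets */ }
static int search(void) { return -1; /* probe primary/secondary buckets */ }

static void
writer_displace(void)
{
	/* Single writer: a plain read of cnt is enough; the release
	 * store publishes the increment before any entry moves.
	 */
	__atomic_store_n(&cnt, cnt + 1, __ATOMIC_RELEASE);
	/* Keep the entry movement below the counter store. */
	__atomic_thread_fence(__ATOMIC_RELEASE);
	move_entry();
}

static int
reader_lookup(void)
{
	uint32_t cnt_a, cnt_b;
	int ret;

	do {
		/* Acquire: the loads of the search stay below this. */
		cnt_a = __atomic_load_n(&cnt, __ATOMIC_ACQUIRE);
		ret = search();
		/* Re-read the counter; if the writer bumped it, an
		 * entry may have moved mid-search, so retry.
		 */
		cnt_b = __atomic_load_n(&cnt, __ATOMIC_ACQUIRE);
	} while (cnt_b != cnt_a);

	return ret;
}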
@@ -1054,7 +1059,12 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 		/* Check if slot is available */
 		if (likely(cur_bkt->key_idx[i] == EMPTY_SLOT)) {
 			cur_bkt->sig_current[i] = short_sig;
-			cur_bkt->key_idx[i] = new_idx;
+			/* Store to signature should not leak after
+			 * the store to key_idx
+			 */
+			__atomic_store_n(&cur_bkt->key_idx[i],
+					 new_idx,
+					 __ATOMIC_RELEASE);
 			__hash_rw_writer_unlock(h);
 			return new_idx - 1;
 		}
@@ -1072,7 +1082,12 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 	bkt_id = (uint32_t)((uintptr_t)ext_bkt_id) - 1;
 	/* Use the first location of the new bucket */
 	(h->buckets_ext[bkt_id]).sig_current[0] = short_sig;
-	(h->buckets_ext[bkt_id]).key_idx[0] = new_idx;
+	/* Store to signature should not leak after
+	 * the store to key_idx
+	 */
+	__atomic_store_n(&(h->buckets_ext[bkt_id]).key_idx[0],
+			 new_idx,
+			 __ATOMIC_RELEASE);
 	/* Link the new bucket to sec bucket linked list */
 	last = rte_hash_get_last_bkt(sec_bkt);
 	last->next = &h->buckets_ext[bkt_id];
@@ -1366,7 +1381,8 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
  * empty slot.
  */
 static inline void
-__rte_hash_compact_ll(struct rte_hash_bucket *cur_bkt, int pos) {
+__rte_hash_compact_ll(const struct rte_hash *h,
+			struct rte_hash_bucket *cur_bkt, int pos) {
 	int i;
 	struct rte_hash_bucket *last_bkt;
 
@@ -1377,10 +1393,27 @@ __rte_hash_compact_ll(struct rte_hash_bucket *cur_bkt, int pos) {
 
 	for (i = RTE_HASH_BUCKET_ENTRIES - 1; i >= 0; i--) {
 		if (last_bkt->key_idx[i] != EMPTY_SLOT) {
-			cur_bkt->key_idx[pos] = last_bkt->key_idx[i];
 			cur_bkt->sig_current[pos] = last_bkt->sig_current[i];
+			__atomic_store_n(&cur_bkt->key_idx[pos],
+					 last_bkt->key_idx[i],
+					 __ATOMIC_RELEASE);
+			if (h->readwrite_concur_lf_support) {
+				/* Inform the readers that the table has changed
+				 * Since there is one writer, load acquire on
+				 * tbl_chng_cnt is not required.
+				 */
+				__atomic_store_n(h->tbl_chng_cnt,
+					 *h->tbl_chng_cnt + 1,
+					 __ATOMIC_RELEASE);
+				/* The store to sig_current should
+				 * not move above the store to tbl_chng_cnt.
+				 */
+				__atomic_thread_fence(__ATOMIC_RELEASE);
+			}
 			last_bkt->sig_current[i] = NULL_SIGNATURE;
-			last_bkt->key_idx[i] = EMPTY_SLOT;
+			__atomic_store_n(&last_bkt->key_idx[i],
+					 EMPTY_SLOT,
+					 __ATOMIC_RELEASE);
 			return;
 		}
 	}
@@ -1449,7 +1482,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	/* look for key in primary bucket */
 	ret = search_and_remove(h, key, prim_bkt, short_sig, &pos);
 	if (ret != -1) {
-		__rte_hash_compact_ll(prim_bkt, pos);
+		__rte_hash_compact_ll(h, prim_bkt, pos);
 		last_bkt = prim_bkt->next;
 		prev_bkt = prim_bkt;
 		goto return_bkt;
@@ -1461,7 +1494,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	FOR_EACH_BUCKET(cur_bkt, sec_bkt) {
 		ret = search_and_remove(h, key, cur_bkt, short_sig, &pos);
 		if (ret != -1) {
-			__rte_hash_compact_ll(cur_bkt, pos);
+			__rte_hash_compact_ll(h, cur_bkt, pos);
 			last_bkt = sec_bkt->next;
 			prev_bkt = sec_bkt;
 			goto return_bkt;
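(Illustration, not part of the patch: the release stores to key_idx added above pair with acquire loads on the reader side. A condensed sketch of that publication pattern, with a made-up one-entry slot struct standing in for a bucket entry.)

#include <stdint.h>

#define EMPTY_SLOT 0

/* Hypothetical view of a single bucket slot. */
struct slot {
	uint16_t sig;		/* short signature */
	uint32_t key_idx;	/* 0 (EMPTY_SLOT) means free */
};

static void
publish(struct slot *s, uint16_t sig, uint32_t new_idx)
{
	s->sig = sig;
	/* The store to sig must not leak below this release store;
	 * the slot becomes visible to readers only via key_idx.
	 */
	__atomic_store_n(&s->key_idx, new_idx, __ATOMIC_RELEASE);
}

static int
observe(const struct slot *s, uint16_t *sig_out)
{
	uint32_t idx = __atomic_load_n(&s->key_idx, __ATOMIC_ACQUIRE);

	if (idx == EMPTY_SLOT)
		return -1;
	/* Acquire pairs with the writer's release: the signature
	 * written before publication is guaranteed to be visible.
	 */
	*sig_out = s->sig;
	return (int)idx;
}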
@@ -1488,11 +1521,24 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	}
 	/* found empty bucket and recycle */
 	if (i == RTE_HASH_BUCKET_ENTRIES) {
-		prev_bkt->next = last_bkt->next = NULL;
+		prev_bkt->next = NULL;
 		uint32_t index = last_bkt - h->buckets_ext + 1;
-		rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
+		/* Recycle the empty bkt if
+		 * no_free_on_del is disabled.
+		 */
+		if (h->no_free_on_del)
+			/* Store index of an empty ext bkt to be recycled
+			 * on calling rte_hash_del_xxx APIs.
+			 * When lock free read-write concurrency is enabled,
+			 * an empty ext bkt cannot be put into free list
+			 * immediately (as readers might be using it still).
+			 * Hence freeing of the ext bkt is piggy-backed to
+			 * freeing of the key index.
+			 */
+			h->ext_bkt_to_free[ret] = index;
+		else
+			rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
 	}
-
 	__hash_rw_writer_unlock(h);
 	return ret;
 }
@@ -1545,6 +1591,14 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
 	/* Out of bounds */
 	if (position >= total_entries)
 		return -EINVAL;
+	if (h->ext_table_support && h->readwrite_concur_lf_support) {
+		uint32_t index = h->ext_bkt_to_free[position];
+		if (index) {
+			/* Recycle empty ext bkt to free list. */
+			rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
+			h->ext_bkt_to_free[position] = 0;
+		}
+	}
 
 	if (h->use_local_cache) {
 		lcore_id = rte_lcore_id();
@@ -1855,6 +1909,9 @@ __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 		rte_prefetch0(secondary_bkt[i]);
 	}
 
+	for (i = 0; i < num_keys; i++)
+		positions[i] = -ENOENT;
+
 	do {
 		/* Load the table change counter before the lookup
 		 * starts. Acquire semantics will make sure that
@@ -1899,7 +1956,6 @@ __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 
 		/* Compare keys, first hits in primary first */
 		for (i = 0; i < num_keys; i++) {
-			positions[i] = -ENOENT;
 			while (prim_hitmask[i]) {
 				uint32_t hit_index =
 						__builtin_ctzl(prim_hitmask[i])
@@ -1972,6 +2028,35 @@ __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 			continue;
 		}
 
+		/* all found, do not need to go through ext bkt */
+		if (hits == ((1ULL << num_keys) - 1)) {
+			if (hit_mask != NULL)
+				*hit_mask = hits;
+			return;
+		}
+		/* need to check ext buckets for match */
+		if (h->ext_table_support) {
+			for (i = 0; i < num_keys; i++) {
+				if ((hits & (1ULL << i)) != 0)
+					continue;
+				next_bkt = secondary_bkt[i]->next;
+				FOR_EACH_BUCKET(cur_bkt, next_bkt) {
+					if (data != NULL)
+						ret = search_one_bucket_lf(h,
+							keys[i], sig[i],
+							&data[i], cur_bkt);
+					else
+						ret = search_one_bucket_lf(h,
+							keys[i], sig[i],
+							NULL, cur_bkt);
+					if (ret != -1) {
+						positions[i] = ret;
+						hits |= 1ULL << i;
+						break;
+					}
+				}
+			}
+		}
 		/* The loads of sig_current in compare_signatures
 		 * should not move below the load from tbl_chng_cnt.
 		 */
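(Illustration, not part of the patch: a simplified model of the piggy-backed recycling added above. bkt_to_free[] plays the role of h->ext_bkt_to_free, indexed by key position, and recycle() stands in for the enqueue onto free_ext_bkts; all names here are made up.)

#include <stdint.h>

#define MAX_KEYS 1024
static uint32_t bkt_to_free[MAX_KEYS];	/* 0 = nothing pending */

/* Stand-in for rte_ring_sp_enqueue() on the free-bucket ring. */
static void recycle(uint32_t bkt_index) { (void)bkt_index; }

static void
on_delete(uint32_t key_pos, uint32_t emptied_bkt_index)
{
	/* Readers may still be traversing the emptied ext bucket,
	 * so only remember its index; do not recycle it yet.
	 */
	bkt_to_free[key_pos] = emptied_bkt_index;
}

static void
on_free_key(uint32_t key_pos)
{
	uint32_t index = bkt_to_free[key_pos];

	if (index) {
		/* Safe now: the application guarantees no reader
		 * still references this key position.
		 */
		recycle(index);
		bkt_to_free[key_pos] = 0;
	}
}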
@@ -1988,34 +2073,6 @@ __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 					__ATOMIC_ACQUIRE);
 	} while (cnt_b != cnt_a);
 
-	/* all found, do not need to go through ext bkt */
-	if ((hits == ((1ULL << num_keys) - 1)) || !h->ext_table_support) {
-		if (hit_mask != NULL)
-			*hit_mask = hits;
-		__hash_rw_reader_unlock(h);
-		return;
-	}
-
-	/* need to check ext buckets for match */
-	for (i = 0; i < num_keys; i++) {
-		if ((hits & (1ULL << i)) != 0)
-			continue;
-		next_bkt = secondary_bkt[i]->next;
-		FOR_EACH_BUCKET(cur_bkt, next_bkt) {
-			if (data != NULL)
-				ret = search_one_bucket_lf(h, keys[i],
-						sig[i], &data[i], cur_bkt);
-			else
-				ret = search_one_bucket_lf(h, keys[i],
-						sig[i], NULL, cur_bkt);
-			if (ret != -1) {
-				positions[i] = ret;
-				hits |= 1ULL << i;
-				break;
-			}
-		}
-	}
-
 	if (hit_mask != NULL)
 		*hit_mask = hits;
 }
diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h
index eacdaa8d4684..48c85c890712 100644
--- a/lib/librte_hash/rte_cuckoo_hash.h
+++ b/lib/librte_hash/rte_cuckoo_hash.h
@@ -210,6 +210,13 @@ struct rte_hash {
 	rte_rwlock_t *readwrite_lock; /**< Read-write lock thread-safety. */
 	struct rte_hash_bucket *buckets_ext; /**< Extra buckets array */
 	struct rte_ring *free_ext_bkts; /**< Ring of indexes of free buckets */
+	/* Stores index of an empty ext bkt to be recycled on calling
+	 * rte_hash_del_xxx APIs. When lock free read-write concurrency is
+	 * enabled, an empty ext bkt cannot be put into free list immediately
+	 * (as readers might be using it still). Hence freeing of the ext bkt
+	 * is piggy-backed to freeing of the key index.
+	 */
+	uint32_t *ext_bkt_to_free;
 	uint32_t *tbl_chng_cnt;
 	/**< Indicates if the hash table changed from last read. */
 } __rte_cache_aligned;
-- 
2.17.1
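(Illustration, not part of the patch: a usage sketch of the bulk lookup path reworked above, via the existing public API. The key count and the printf reporting are made up for the example.)

#include <stdio.h>
#include <stdint.h>

#include <rte_hash.h>

#define NUM_KEYS 4	/* illustrative; must not exceed RTE_HASH_LOOKUP_BULK_MAX */

static void
bulk_lookup_example(const struct rte_hash *h, const void *keys[NUM_KEYS])
{
	void *data[NUM_KEYS];
	uint64_t hit_mask = 0;
	uint32_t i;

	/* Fills hit_mask and data[]; misses are the bits left clear
	 * (internally the positions start out as -ENOENT).
	 */
	rte_hash_lookup_bulk_data(h, keys, NUM_KEYS, &hit_mask, data);

	for (i = 0; i < NUM_KEYS; i++) {
		if (hit_mask & (1ULL << i))
			printf("key %u found, data %p\n", i, data[i]);
		else
			printf("key %u not found\n", i);
	}
}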