* [dpdk-dev] [Bug 261] DPDK 18.11 bug on rte_hash_free_key_with_position
@ 2019-04-30 9:03 bugzilla
2019-04-30 9:03 ` bugzilla
2019-05-01 3:33 ` Dharmik Thakkar
0 siblings, 2 replies; 4+ messages in thread
From: bugzilla @ 2019-04-30 9:03 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=261
Bug ID: 261
Summary: DPDK 18.11 bug on rte_hash_free_key_with_position
Product: DPDK
Version: 18.11
Hardware: All
OS: All
Status: CONFIRMED
Severity: normal
Priority: Normal
Component: other
Assignee: dev@dpdk.org
Reporter: zhongdahulinfan@163.com
Target Milestone: ---
First, let's look at the definition of rte_hash_free_key_with_position in DPDK
18.11, shown below:
int __rte_experimental
rte_hash_free_key_with_position(const struct rte_hash *h,
				const int32_t position)
{
	RETURN_IF_TRUE(((h == NULL) || (position == EMPTY_SLOT)), -EINVAL);

	unsigned int lcore_id, n_slots;
	struct lcore_cache *cached_free_slots;
	const int32_t total_entries = h->num_buckets * RTE_HASH_BUCKET_ENTRIES;

	/* Out of bounds */
	if (position >= total_entries)
		return -EINVAL;

	if (h->use_local_cache) {
		lcore_id = rte_lcore_id();
		cached_free_slots = &h->local_free_slots[lcore_id];
		/* Cache full, need to free it. */
		if (cached_free_slots->len == LCORE_CACHE_SIZE) {
			/* Need to enqueue the free slots in global ring. */
			n_slots = rte_ring_mp_enqueue_burst(h->free_slots,
						cached_free_slots->objs,
						LCORE_CACHE_SIZE, NULL);
			cached_free_slots->len -= n_slots;
		}
		/* Put index of new free slot in cache. */
		cached_free_slots->objs[cached_free_slots->len] =
				(void *)((uintptr_t)position);
		cached_free_slots->len++;
	} else {
		rte_ring_sp_enqueue(h->free_slots,
				(void *)((uintptr_t)position));
	}

	return 0;
}
There are two issues with this API.

First, the input parameter 'position' is the key index into the hash table, as
returned by rte_hash_add_key_xxx or rte_hash_del_key_xxx. Taking
rte_hash_del_key_with_hash as an example, we see that it returns key_idx - 1
when the entry is found and removed successfully. Hence
rte_hash_free_key_with_position is incorrect when it enqueues position into
free_slots directly: it must increment position by one to recover the actual
key index before enqueuing it into free_slots.
For comparison, remove_entry() enqueues key_idx directly, which is correct:
static inline void
remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
{
	unsigned lcore_id, n_slots;
	struct lcore_cache *cached_free_slots;

	if (h->use_local_cache) {
		lcore_id = rte_lcore_id();
		cached_free_slots = &h->local_free_slots[lcore_id];
		/* Cache full, need to free it. */
		if (cached_free_slots->len == LCORE_CACHE_SIZE) {
			/* Need to enqueue the free slots in global ring. */
			n_slots = rte_ring_mp_enqueue_burst(h->free_slots,
						cached_free_slots->objs,
						LCORE_CACHE_SIZE, NULL);
			cached_free_slots->len -= n_slots;
		}
		/* Put index of new free slot in cache. */
		cached_free_slots->objs[cached_free_slots->len] =
				(void *)((uintptr_t)bkt->key_idx[i]);
		cached_free_slots->len++;
	} else {
		rte_ring_sp_enqueue(h->free_slots,
				(void *)((uintptr_t)bkt->key_idx[i]));
	}
}
Second, the computation of total_entries is incorrect. It should be the total
number of key slots, not num_buckets * RTE_HASH_BUCKET_ENTRIES. As seen in
rte_hash_create, the number of key slots is
(params->entries + (RTE_MAX_LCORE - 1) * (LCORE_CACHE_SIZE - 1) + 1) when
use_local_cache is true, and (params->entries + 1) otherwise:
struct rte_hash *
rte_hash_create(const struct rte_hash_parameters *params)
{
	...
	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD) {
		use_local_cache = 1;
		writer_takes_lock = 1;
	}
	...
	/* Store all keys and leave the first entry as a dummy entry for
	 * lookup_bulk */
	if (use_local_cache)
		/*
		 * Increase number of slots by total number of indices
		 * that can be stored in the lcore caches
		 * except for the first cache
		 */
		num_key_slots = params->entries + (RTE_MAX_LCORE - 1) *
					(LCORE_CACHE_SIZE - 1) + 1;
	else
		num_key_slots = params->entries + 1;
	...
	/* Populate free slots ring. Entry zero is reserved for key misses. */
	for (i = 1; i < num_key_slots; i++)
		rte_ring_sp_enqueue(r, (void *)((uintptr_t) i));
	...
}
--
You are receiving this mail because:
You are the assignee for the bug.
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [dpdk-dev] [Bug 261] DPDK 18.11 bug on rte_hash_free_key_with_position
2019-04-30 9:03 [dpdk-dev] [Bug 261] DPDK 18.11 bug on rte_hash_free_key_with_position bugzilla
2019-04-30 9:03 ` bugzilla
@ 2019-05-01 3:33 ` Dharmik Thakkar
2019-05-01 3:33 ` Dharmik Thakkar
1 sibling, 1 reply; 4+ messages in thread
From: Dharmik Thakkar @ 2019-05-01 3:33 UTC (permalink / raw)
To: bugzilla; +Cc: dev, zhongdahulinfan, nd, Honnappa Nagarahalli
I am taking a look at this bug. Will update ASAP. Did you run any test case to detect the bug?
Thank you!
^ permalink raw reply [flat|nested] 4+ messages in thread
end of thread, other threads:[~2019-05-01 3:33 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-30 9:03 [dpdk-dev] [Bug 261] DPDK 18.11 bug on rte_hash_free_key_with_position bugzilla
2019-04-30 9:03 ` bugzilla
2019-05-01 3:33 ` Dharmik Thakkar
2019-05-01 3:33 ` Dharmik Thakkar