DPDK patches and discussions
From: mstolarchuk <mike.stolarchuk@bigswitch.com>
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org
Subject: [dpdk-dev] [PATCH 6/6] static variable shouldn't be used to compute recursion level
Date: Fri, 18 Aug 2017 13:19:19 -0700	[thread overview]
Message-ID: <1503087559-103390-1-git-send-email-mike.stolarchuk@bigswitch.com> (raw)

The 'static unsigned int nr_pushes' counter creates a single instance of
the value shared across every invocation of make_space_bucket(). On a
NUMA machine that single instance lives on one specific NUMA node,
degrading performance for threads running on other nodes.

The more troublesome issue is that when two unrelated threads insert
into the hash at the same time, each thread updates this static as if it
were its own, so the recursion-depth accounting becomes unreliable.
Move the counter onto the caller's stack and pass it down through the
recursion instead.
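
For readers skimming the diff, a minimal standalone sketch of the pattern
being removed and the pattern being introduced; displace_shared(),
__displace(), displace() and MAX_PUSHES are hypothetical names used only
for illustration, not rte_hash symbols:

    #include <errno.h>

    #define MAX_PUSHES 100

    /*
     * Problematic pattern: one static counter shared by every caller,
     * on every thread and every NUMA node.
     */
    static int
    displace_shared(void *bkt)
    {
    	static unsigned int nr_pushes;	/* shared across all inserts */

    	(void)bkt;
    	if (nr_pushes++ > MAX_PUSHES)
    		return -ENOSPC;
    	/* ... recurse on the alternative bucket ... */
    	nr_pushes = 0;
    	return 0;
    }

    /*
     * Fixed pattern: the counter lives on the inserting thread's stack
     * and is passed down by pointer through the recursion.
     */
    static int
    __displace(void *bkt, unsigned int *nr_pushes)
    {
    	(void)bkt;
    	if ((*nr_pushes)++ > MAX_PUSHES)
    		return -ENOSPC;
    	/* ... recurse with the same nr_pushes pointer ... */
    	return 0;
    }

    static int
    displace(void *bkt)
    {
    	unsigned int nr_pushes = 0;

    	return __displace(bkt, &nr_pushes);
    }

Keeping the counter on the caller's stack makes the push count per-call
and per-thread, so it is never allocated on a remote NUMA node and never
shared between unrelated inserts.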

Signed-off-by: mstolarchuk <mike.stolarchuk@bigswitch.com>
---
 lib/librte_hash/rte_cuckoo_hash.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index 946252f..4567eb7 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -417,9 +417,10 @@ struct rte_hash *
 
 /* Search for an entry that can be pushed to its alternative location */
 static inline int
-make_space_bucket(const struct rte_hash *h, struct rte_hash_bucket *bkt)
+__make_space_bucket(const struct rte_hash *h,
+	struct rte_hash_bucket *bkt,
+	unsigned int *nr_pushes)
 {
-	static unsigned int nr_pushes;
 	unsigned i, j;
 	int ret;
 	uint32_t next_bucket_idx;
@@ -456,15 +457,15 @@ struct rte_hash *
 			break;
 
 	/* All entries have been pushed, so entry cannot be added */
-	if (i == RTE_HASH_BUCKET_ENTRIES || nr_pushes > RTE_HASH_MAX_PUSHES)
+	if (i == RTE_HASH_BUCKET_ENTRIES || nr_pushes[0] > RTE_HASH_MAX_PUSHES)
 		return -ENOSPC;
 
 	/* Set flag to indicate that this entry is going to be pushed */
 	bkt->flag[i] = 1;
 
-	nr_pushes++;
+	nr_pushes[0]++;
 	/* Need room in alternative bucket to insert the pushed entry */
-	ret = make_space_bucket(h, next_bkt[i]);
+	ret = __make_space_bucket(h, next_bkt[i], nr_pushes);
 	/*
 	 * After recursive function.
 	 * Clear flags and insert the pushed entry
@@ -472,7 +473,7 @@ struct rte_hash *
 	 * or return error
 	 */
 	bkt->flag[i] = 0;
-	nr_pushes = 0;
+	nr_pushes[0] = 0;
 	if (ret >= 0) {
 		next_bkt[i]->sig_alt[ret] = bkt->sig_current[i];
 		next_bkt[i]->sig_current[ret] = bkt->sig_alt[i];
@@ -483,6 +484,14 @@ struct rte_hash *
 
 }
 
+static inline int
+make_space_bucket(const struct rte_hash *h, struct rte_hash_bucket *bkt)
+{
+	unsigned int nr_pushes = 0;
+
+	return __make_space_bucket(h, bkt, &nr_pushes);
+}
+
 /*
  * Function called to enqueue back an index in the cache/ring,
  * as slot has not being used and it can be used in the
-- 
1.9.1
