From: "Mattias Rönnblom" <hofors@lysator.liu.se>
To: Stephen Hemminger <stephen@networkplumber.org>, dev@dpdk.org
Cc: "Morten Brørup" <mb@smartsharesystems.com>,
"Mattias Rönnblom" <mattias.ronnblom@ericsson.com>,
"Yipeng Wang" <yipeng1.wang@intel.com>,
"Sameh Gobriel" <sameh.gobriel@intel.com>,
"Bruce Richardson" <bruce.richardson@intel.com>,
"Vladimir Medvedkin" <vladimir.medvedkin@intel.com>
Subject: Re: [RFC 3/3] hash: add support for common small key sizes
Date: Fri, 22 Aug 2025 09:19:45 +0200 [thread overview]
Message-ID: <5427c6f3-4446-4ee3-909e-5f2925d2b286@lysator.liu.se> (raw)
In-Reply-To: <20250821203646.133506-4-stephen@networkplumber.org>
On 2025-08-21 22:35, Stephen Hemminger wrote:
> Add new compare functions for common small key sizes.
>
> Bugzilla ID: 1775
> Suggested-by: Morten Brørup <mb@smartsharesystems.com>
> Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> lib/hash/rte_cuckoo_hash.c | 54 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 54 insertions(+)
>
> diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
> index 3212695d92..825889c320 100644
> --- a/lib/hash/rte_cuckoo_hash.c
> +++ b/lib/hash/rte_cuckoo_hash.c
> @@ -49,6 +49,11 @@ RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO);
> */
> enum cmp_jump_table_case {
> KEY_CUSTOM = 0,
> + KEY_2_BYTES,
> + KEY_4_BYTES,
> + KEY_6_BYTES,
> + KEY_8_BYTES,
> + KEY_12_BYTES,
> KEY_16_BYTES,
> KEY_32_BYTES,
> KEY_48_BYTES,
> @@ -86,6 +91,50 @@ rte_hash_k32_cmp_eq(const void *key1, const void *key2, size_t key_len)
> }
> #endif
>
> +static inline int
> +rte_hash_k2_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> + const uint16_t *k1 = key1;
> + const uint16_t *k2 = key2;
> +
What this does is require that the keys be 16-bit aligned (which wasn't
the case before).
You could do

    uint16_t k1;
    memcpy(&k1, key1, sizeof(uint16_t));

instead. It would generate the same code, but be safe from any future
alignment issues.
Anyway, maybe it's safe to assume the keys are aligned, so this is not
an issue.
> + return k1[0] ^ k2[0];
> +}
Haven't you implemented "neq" rather than "eq" here? If the keys are
equal, the result is 0; for an "eq" function it should be != 0.
Would it be worth adding a comment like "use XOR to make this
branch-free"? It may not be obvious to all readers.
That said, I’m not sure this trick will actually change the generated
object code - especially if the result of the eq function is still used
in a conditional afterward. Anyway, keeping it seems like a good
conservative approach.
> +
> +static inline int
> +rte_hash_k4_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> + const uint32_t *k1 = key1;
> + const uint32_t *k2 = key2;
> +
> + return k1[0] ^ k2[0];
> +}
> +
> +static inline int
> +rte_hash_k6_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> + const uint16_t *k1 = key1;
> + const uint16_t *k2 = key2;
> +
> + return (k1[0] ^ k2[0]) | (k1[1] ^ k2[1]) | (k1[2] ^ k2[2]);
> +}
> +
> +static inline int
> +rte_hash_k8_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> + const uint64_t *k1 = key1;
> + const uint64_t *k2 = key2;
> +
> + return k1[0] ^ k2[0];
> +}
> +
> +static inline int
> +rte_hash_k12_cmp_eq(const void *key1, const void *key2, size_t key_len __rte_unused)
> +{
> + const uint32_t *k1 = key1;
> + const uint32_t *k2 = key2;
> +
> + return (k1[0] ^ k2[0]) | (k1[1] ^ k2[1]) | (k1[2] ^ k2[2]);
> +}
>
> static inline int
> rte_hash_k48_cmp_eq(const void *key1, const void *key2, size_t key_len)
> @@ -228,6 +277,11 @@ void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
> */
> static const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
> [KEY_CUSTOM] = NULL,
> + [KEY_2_BYTES] = rte_hash_k2_cmp_eq,
> + [KEY_4_BYTES] = rte_hash_k4_cmp_eq,
> + [KEY_6_BYTES] = rte_hash_k6_cmp_eq,
> + [KEY_8_BYTES] = rte_hash_k8_cmp_eq,
> + [KEY_12_BYTES] = rte_hash_k12_cmp_eq,
> [KEY_16_BYTES] = rte_hash_k16_cmp_eq,
> [KEY_32_BYTES] = rte_hash_k32_cmp_eq,
> [KEY_48_BYTES] = rte_hash_k48_cmp_eq,
2025-08-21 20:35 [RFC 0/3] hash: optimize compare logic Stephen Hemminger
2025-08-21 20:35 ` [RFC 1/3] hash: move table of hash compare functions out of header Stephen Hemminger
2025-08-22 9:05 ` Morten Brørup
2025-08-21 20:35 ` [RFC 2/3] hash: reduce architecture special cases Stephen Hemminger
2025-08-22 9:20 ` Morten Brørup
2025-08-21 20:35 ` [RFC 3/3] hash: add support for common small key sizes Stephen Hemminger
2025-08-22 7:19 ` Mattias Rönnblom [this message]
2025-08-22 9:50 ` Morten Brørup