From: Vladimir Medvedkin
To: dev@dpdk.org
Cc: yipeng1.wang@intel.com, sameh.gobriel@intel.com, bruce.richardson@intel.com,
 konstantin.ananyev@intel.com, stephen@networkplumber.org, thomas@monjalon.net
Date: Tue, 2 Nov 2021 18:38:24 +0000
Message-Id: <1635878305-102888-4-git-send-email-vladimir.medvedkin@intel.com>
In-Reply-To: <1635878305-102888-1-git-send-email-vladimir.medvedkin@intel.com>
References: <1635878305-102888-1-git-send-email-vladimir.medvedkin@intel.com>
In-Reply-To: <1630944239-363648-1-git-send-email-vladimir.medvedkin@intel.com>
References: <1630944239-363648-1-git-send-email-vladimir.medvedkin@intel.com>
Subject: [dpdk-dev] [PATCH v8 3/4] hash: enable gfni thash implementation

This patch enables the new GFNI Toeplitz hash implementation in the
predictable RSS library.
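As an illustration of the intended use (a sketch only, not part of the diff
below), a caller can fetch the precomputed matrices from a thash context and
pick between the GFNI and scalar Toeplitz paths. The calc_tuple_hash() helper
name and the <rte_thash_gfni.h> include are assumptions made for this
example; rte_thash_get_gfni_matrices(), rte_thash_gfni(), rte_softrss() and
rte_thash_get_key() are used with the signatures visible in the diff.

/*
 * Usage sketch only, not part of this patch. The helper name and the
 * <rte_thash_gfni.h> include are illustrative assumptions; the tuple is
 * expected in network byte order, as in rte_thash_adjust_tuple().
 */
#include <rte_byteorder.h>
#include <rte_thash.h>
#include <rte_thash_gfni.h>

static uint32_t
calc_tuple_hash(struct rte_thash_ctx *ctx, const uint8_t *tuple,
	uint32_t tuple_len)
{
	/* NULL means GFNI is not supported; fall back to rte_softrss(). */
	const uint64_t *matrices = rte_thash_get_gfni_matrices(ctx);
	uint32_t tmp_tuple[tuple_len / 4];
	uint32_t j;

	if (matrices != NULL)
		return rte_thash_gfni(matrices, tuple, tuple_len);

	/* Scalar path works on host-endian 32-bit words. */
	for (j = 0; j < tuple_len / 4; j++)
		tmp_tuple[j] = rte_be_to_cpu_32(*(const uint32_t *)&tuple[j * 4]);

	return rte_softrss(tmp_tuple, tuple_len / 4, rte_thash_get_key(ctx));
}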
Signed-off-by: Vladimir Medvedkin
Acked-by: Konstantin Ananyev
---
 lib/hash/rte_thash.c | 42 ++++++++++++++++++++++++++++++++++++++----
 lib/hash/rte_thash.h | 19 +++++++++++++++++++
 lib/hash/version.map |  1 +
 3 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index 9d66a5d..6945a0a 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -87,6 +87,8 @@ struct rte_thash_ctx {
 	uint32_t	reta_sz_log;	/** < size of the RSS ReTa in bits */
 	uint32_t	subtuples_nb;	/** < number of subtuples */
 	uint32_t	flags;
+	uint64_t	*matrices;
+	/**< matrices used with rte_thash_gfni implementation */
 	uint8_t		hash_key[0];
 };
 
@@ -267,12 +269,28 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
 			ctx->hash_key[i] = rte_rand();
 	}
 
+	if (rte_thash_gfni_supported()) {
+		ctx->matrices = rte_zmalloc(NULL, key_len * sizeof(uint64_t),
+			RTE_CACHE_LINE_SIZE);
+		if (ctx->matrices == NULL) {
+			RTE_LOG(ERR, HASH, "Cannot allocate matrices\n");
+			rte_errno = ENOMEM;
+			goto free_ctx;
+		}
+
+		rte_thash_complete_matrix(ctx->matrices, ctx->hash_key,
+			key_len);
+	}
+
 	te->data = (void *)ctx;
 	TAILQ_INSERT_TAIL(thash_list, te, next);
 
 	rte_mcfg_tailq_write_unlock();
 
 	return ctx;
+
+free_ctx:
+	rte_free(ctx);
 free_te:
 	rte_free(te);
 exit:
@@ -386,6 +404,10 @@ generate_subkey(struct rte_thash_ctx *ctx, struct thash_lfsr *lfsr,
 			set_bit(ctx->hash_key, get_rev_bit_lfsr(lfsr), i);
 	}
 
+	if (ctx->matrices != NULL)
+		rte_thash_complete_matrix(ctx->matrices, ctx->hash_key,
+			ctx->key_len);
+
 	return 0;
 }
 
@@ -642,6 +664,12 @@ rte_thash_get_key(struct rte_thash_ctx *ctx)
 	return ctx->hash_key;
 }
 
+const uint64_t *
+rte_thash_get_gfni_matrices(struct rte_thash_ctx *ctx)
+{
+	return ctx->matrices;
+}
+
 static inline uint8_t
 read_unaligned_byte(uint8_t *ptr, unsigned int len, unsigned int offset)
 {
@@ -753,11 +781,17 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
 	attempts = RTE_MIN(attempts, 1U << (h->tuple_len - ctx->reta_sz_log));
 
 	for (i = 0; i < attempts; i++) {
-		for (j = 0; j < (tuple_len / 4); j++)
-			tmp_tuple[j] =
-				rte_be_to_cpu_32(*(uint32_t *)&tuple[j * 4]);
+		if (ctx->matrices != NULL)
+			hash = rte_thash_gfni(ctx->matrices, tuple, tuple_len);
+		else {
+			for (j = 0; j < (tuple_len / 4); j++)
+				tmp_tuple[j] =
+					rte_be_to_cpu_32(
+						*(uint32_t *)&tuple[j * 4]);
+
+			hash = rte_softrss(tmp_tuple, tuple_len / 4, hash_key);
+		}
 
-		hash = rte_softrss(tmp_tuple, tuple_len / 4, hash_key);
 		adj_bits = rte_thash_get_complement(h, hash, desired_value);
 
 		/*
diff --git a/lib/hash/rte_thash.h b/lib/hash/rte_thash.h
index 40146cf..c11ca0d 100644
--- a/lib/hash/rte_thash.h
+++ b/lib/hash/rte_thash.h
@@ -419,6 +419,25 @@ const uint8_t *
 rte_thash_get_key(struct rte_thash_ctx *ctx);
 
 /**
+ * Get a pointer to the toeplitz hash matrices contained in the context.
+ * These matrices could be used with fast toeplitz hash implementation if
+ * CPU supports GFNI.
+ * Matrices changes after each addition of a helper.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @param ctx
+ *  Thash context
+ * @return
+ *  A pointer to the toeplitz hash key matrices on success
+ *  NULL if GFNI is not supported.
+ */
+__rte_experimental
+const uint64_t *
+rte_thash_get_gfni_matrices(struct rte_thash_ctx *ctx);
+
+/**
  * Function prototype for the rte_thash_adjust_tuple
  * to check if adjusted tuple could be used.
  * Generally it is some kind of lookup function to check
diff --git a/lib/hash/version.map b/lib/hash/version.map
index 153ab87..705c3f3 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -49,5 +49,6 @@ EXPERIMENTAL {
 
 	#added in 21.11
 	rte_thash_complete_matrix;
+	rte_thash_get_gfni_matrices;
 	rte_thash_gfni_supported;
 };
-- 
2.7.4
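
For completeness, a similarly hedged sketch of the non-context path:
precomputing the GFNI matrices once for a fixed RSS key, mirroring what
rte_thash_init_ctx() now does internally. The 40-byte key length and the
setup_gfni_key() helper name are illustrative; rte_thash_gfni_supported()
and rte_thash_complete_matrix() are the symbols already exported in
version.map.

/*
 * Sketch only, not part of this patch: precompute one 64-bit matrix per
 * key byte for a static RSS key. The key length and helper name are
 * illustrative assumptions.
 */
#include <errno.h>
#include <stdint.h>
#include <rte_thash.h>

#define EXAMPLE_RSS_KEY_LEN 40

static uint64_t gfni_matrices[EXAMPLE_RSS_KEY_LEN];

static int
setup_gfni_key(const uint8_t key[EXAMPLE_RSS_KEY_LEN])
{
	if (!rte_thash_gfni_supported())
		return -ENOTSUP;

	/* One 64-bit matrix is produced for every byte of the key. */
	rte_thash_complete_matrix(gfni_matrices, key, EXAMPLE_RSS_KEY_LEN);
	return 0;
}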