From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
To: dev@dpdk.org
Cc: yipeng1.wang@intel.com, sameh.gobriel@intel.com, bruce.richardson@intel.com,
	konstantin.ananyev@intel.com, stephen@networkplumber.org
Date: Thu, 21 Oct 2021 18:18:15 +0100
Message-Id: <1634836698-10864-3-git-send-email-vladimir.medvedkin@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1634836698-10864-1-git-send-email-vladimir.medvedkin@intel.com>
References: <1634836698-10864-1-git-send-email-vladimir.medvedkin@intel.com>
In-Reply-To: <1634754016-367978-1-git-send-email-vladimir.medvedkin@intel.com>
References: <1634754016-367978-1-git-send-email-vladimir.medvedkin@intel.com>
Subject: [dpdk-dev] [PATCH v4 2/5] hash: enable gfni thash implementation

This patch enables the new GFNI Toeplitz hash implementation in the
predictable RSS library. When the CPU supports GFNI, the thash context
now allocates a set of matrices derived from the Toeplitz hash key,
regenerates them whenever a helper updates the key, and uses
rte_thash_gfni() in rte_thash_adjust_tuple() instead of the scalar
rte_softrss(). A new experimental getter, rte_thash_get_gfni_matrices(),
exposes these matrices to applications.
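Below is a minimal usage sketch (illustrative only, not part of the patch):
it shows how an application could prefer the GFNI matrices exposed by the new
getter and fall back to the scalar rte_softrss() otherwise. The helper name
calc_tuple_hash() and the 64-byte tuple bound are hypothetical, and
rte_thash_gfni()/rte_thash_gfni_supported() from patch 1/5 of this series are
assumed to be reachable through rte_thash.h.

#include <rte_thash.h>
#include <rte_byteorder.h>

/* Hypothetical helper: hash one tuple with whatever the context provides. */
static uint32_t
calc_tuple_hash(struct rte_thash_ctx *ctx, const uint8_t *tuple,
		uint32_t tuple_len)
{
	const uint64_t *matrices = rte_thash_get_gfni_matrices(ctx);
	uint32_t tmp[16];	/* assumes tuple_len <= 64 bytes */
	unsigned int j;

	if (matrices != NULL)
		/* matrices are only present when the CPU supports GFNI */
		return rte_thash_gfni(matrices, tuple, tuple_len);

	/* scalar fallback over the same hash key */
	for (j = 0; j < tuple_len / 4; j++)
		tmp[j] = rte_be_to_cpu_32(*(const uint32_t *)&tuple[j * 4]);

	return rte_softrss(tmp, tuple_len / 4, rte_thash_get_key(ctx));
}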
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/hash/rte_thash.c | 42 ++++++++++++++++++++++++++++++++++++++----
 lib/hash/rte_thash.h | 19 +++++++++++++++++++
 lib/hash/version.map |  1 +
 3 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index e605a6f..242d0ff 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -87,6 +87,8 @@ struct rte_thash_ctx {
 	uint32_t	reta_sz_log;	/** < size of the RSS ReTa in bits */
 	uint32_t	subtuples_nb;	/** < number of subtuples */
 	uint32_t	flags;
+	uint64_t	*matrices;
+	/**< matrices used with rte_thash_gfni implementation */
 	uint8_t		hash_key[0];
 };
 
@@ -266,12 +268,28 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
 			ctx->hash_key[i] = rte_rand();
 	}
 
+	if (rte_thash_gfni_supported()) {
+		ctx->matrices = rte_zmalloc(NULL, key_len * sizeof(uint64_t),
+			RTE_CACHE_LINE_SIZE);
+		if (ctx->matrices == NULL) {
+			RTE_LOG(ERR, HASH, "Cannot allocate matrices\n");
+			rte_errno = ENOMEM;
+			goto free_ctx;
+		}
+
+		rte_thash_complete_matrix(ctx->matrices, ctx->hash_key,
+			key_len);
+	}
+
 	te->data = (void *)ctx;
 	TAILQ_INSERT_TAIL(thash_list, te, next);
 
 	rte_mcfg_tailq_write_unlock();
 
 	return ctx;
+
+free_ctx:
+	rte_free(ctx);
 free_te:
 	rte_free(te);
 exit:
@@ -385,6 +403,10 @@ generate_subkey(struct rte_thash_ctx *ctx, struct thash_lfsr *lfsr,
 			set_bit(ctx->hash_key, get_rev_bit_lfsr(lfsr), i);
 	}
 
+	if (ctx->matrices != NULL)
+		rte_thash_complete_matrix(ctx->matrices, ctx->hash_key,
+			ctx->key_len);
+
 	return 0;
 }
 
@@ -641,6 +663,12 @@ rte_thash_get_key(struct rte_thash_ctx *ctx)
 	return ctx->hash_key;
 }
 
+const uint64_t *
+rte_thash_get_gfni_matrices(struct rte_thash_ctx *ctx)
+{
+	return ctx->matrices;
+}
+
 static inline uint8_t
 read_unaligned_byte(uint8_t *ptr, unsigned int len, unsigned int offset)
 {
@@ -752,11 +780,17 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
 	attempts = RTE_MIN(attempts, 1U << (h->tuple_len - ctx->reta_sz_log));
 
 	for (i = 0; i < attempts; i++) {
-		for (j = 0; j < (tuple_len / 4); j++)
-			tmp_tuple[j] =
-				rte_be_to_cpu_32(*(uint32_t *)&tuple[j * 4]);
+		if (ctx->matrices != NULL)
+			hash = rte_thash_gfni(ctx->matrices, tuple, tuple_len);
+		else {
+			for (j = 0; j < (tuple_len / 4); j++)
+				tmp_tuple[j] =
+					rte_be_to_cpu_32(
+					*(uint32_t *)&tuple[j * 4]);
+
+			hash = rte_softrss(tmp_tuple, tuple_len / 4, hash_key);
+		}
 
-		hash = rte_softrss(tmp_tuple, tuple_len / 4, hash_key);
 		adj_bits = rte_thash_get_complement(h, hash, desired_value);
 
 		/*
diff --git a/lib/hash/rte_thash.h b/lib/hash/rte_thash.h
index a406be0..d12ab81 100644
--- a/lib/hash/rte_thash.h
+++ b/lib/hash/rte_thash.h
@@ -423,6 +423,25 @@ const uint8_t *
 rte_thash_get_key(struct rte_thash_ctx *ctx);
 
 /**
+ * Get a pointer to the Toeplitz hash matrices contained in the context.
+ * These matrices can be used with the fast Toeplitz hash implementation
+ * if the CPU supports GFNI.
+ * The matrices change after each addition of a helper.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @param ctx
+ *  Thash context
+ * @return
+ *  A pointer to the Toeplitz hash key matrices on success,
+ *  NULL if GFNI is not supported.
+ */
+__rte_experimental
+const uint64_t *
+rte_thash_get_gfni_matrices(struct rte_thash_ctx *ctx);
+
+/**
  * Function prototype for the rte_thash_adjust_tuple
  * to check if adjusted tuple could be used.
  * Generally it is some kind of lookup function to check
diff --git a/lib/hash/version.map b/lib/hash/version.map
index cecf922..3eda695 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -43,6 +43,7 @@ EXPERIMENTAL {
 	rte_thash_find_existing;
 	rte_thash_free_ctx;
 	rte_thash_get_complement;
+	rte_thash_get_gfni_matrices;
 	rte_thash_get_helper;
 	rte_thash_get_key;
 	rte_thash_gfni_supported;
-- 
2.7.4
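
As an additional illustration (not part of the patch): the context stores
key_len 64-bit matrices, one per hash key byte, which is why
rte_thash_init_ctx() allocates key_len * sizeof(uint64_t) bytes. The same
matrices can also be built directly from a static RSS key without a thash
context; the function name hash_with_static_key() and the 40-byte zeroed key
below are hypothetical.

#include <stdint.h>
#include <rte_common.h>
#include <rte_thash.h>

static uint32_t
hash_with_static_key(const uint8_t *tuple, int tuple_len)
{
	/* application-chosen Toeplitz key; zeroed here only for brevity */
	static const uint8_t rss_key[40] = { 0 };
	/* one 64-bit matrix per key byte */
	uint64_t matrices[RTE_DIM(rss_key)];

	if (!rte_thash_gfni_supported())
		return 0;	/* a real caller would fall back to rte_softrss() */

	/* expand the key into matrices, as rte_thash_init_ctx() now does */
	rte_thash_complete_matrix(matrices, rss_key, RTE_DIM(rss_key));

	return rte_thash_gfni(matrices, tuple, tuple_len);
}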