Date: Wed, 5 May 2021 21:13:22 +0200
From: Stanislaw Kardach
To: Vladimir Medvedkin
Cc: dev@dpdk.org, konstantin.ananyev@intel.com, andrey.chilikin@intel.com,
 ray.kinsella@intel.com, yipeng1.wang@intel.com, sameh.gobriel@intel.com,
 bruce.richardson@intel.com, david.marchand@redhat.com
Subject: Re: [dpdk-dev] [PATCH v2] hash: fix tuple adjustment
Message-ID: <20210505191322.idkyj5q66ne7cz6q@toster>
References: <1620137248-203174-1-git-send-email-vladimir.medvedkin@intel.com>
 <1620138304-203463-1-git-send-email-vladimir.medvedkin@intel.com>
In-Reply-To: <1620138304-203463-1-git-send-email-vladimir.medvedkin@intel.com>

On Tue, May 04, 2021 at 03:25:04PM +0100, Vladimir Medvedkin wrote:
> rte_thash_adjust_tuple() uses random to generate a new subtuple if
> fn() callback reports about collision. In some cases random changes
> the subtuple in a way that after complementary bits are applied the
> original tuple is obtained. This patch replaces random with subtuple
> increment.
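
If I follow the old flow correctly, the root cause is that XOR is its own
inverse: looking only at the adjusted bits, after attempt n the tuple is
T ^ A1 (A1 being the complement bits), the random retry turns it into
T ^ A1 ^ R, and attempt n+1 applies a new complement A2, giving
T ^ A1 ^ R ^ A2. Whenever R happens to equal A1 ^ A2 the loop is back at
the original T instead of a new candidate, so replacing the random flips
with a deterministic increment of the subtuple makes sense to me.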
> 
> Fixes: 28ebff11c2dc ("hash: add predictable RSS")
> Cc: vladimir.medvedkin@intel.com
> 
> Signed-off-by: Vladimir Medvedkin
> ---
>  lib/hash/rte_thash.c | 121 ++++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 100 insertions(+), 21 deletions(-)
> 
> diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
> index 135a26d..58129df 100644
> --- a/lib/hash/rte_thash.c
> +++ b/lib/hash/rte_thash.c
> @@ -610,16 +610,91 @@ rte_thash_get_key(struct rte_thash_ctx *ctx)
>  	return ctx->hash_key;
>  }
>  
> +static inline uint8_t
> +read_unaligned_byte(uint8_t *ptr, unsigned int len, unsigned int offset)
> +{
> +	uint8_t ret = 0;
> +
> +	ret = ptr[offset / CHAR_BIT];
> +	if (offset % CHAR_BIT) {
> +		ret <<= (offset % CHAR_BIT);
> +		ret |= ptr[(offset / CHAR_BIT) + 1] >>
> +			(CHAR_BIT - (offset % CHAR_BIT));
> +	}
> +
> +	return ret >> (CHAR_BIT - len);
> +}
> +
> +static inline uint32_t
> +read_unaligned_bits(uint8_t *ptr, int len, int offset)
> +{
> +	uint32_t ret = 0;
> +
> +	len = RTE_MAX(len, 0);
> +	len = RTE_MIN(len, (int)(sizeof(uint32_t) * CHAR_BIT));
> +
> +	while (len > 0) {
> +		ret <<= CHAR_BIT;
> +
> +		ret |= read_unaligned_byte(ptr, RTE_MIN(len, CHAR_BIT),
> +			offset);
> +		offset += CHAR_BIT;
> +		len -= CHAR_BIT;
> +	}
> +
> +	return ret;
> +}
> +
> +/* returns mask for len bits with given offset inside byte */
> +static inline uint8_t
> +get_bits_mask(unsigned int len, unsigned int offset)
> +{
> +	unsigned int last_bit;
> +
> +	offset %= CHAR_BIT;
> +	/* last bit within byte */
> +	last_bit = RTE_MIN((unsigned int)CHAR_BIT, offset + len);
> +
> +	return ((1 << (CHAR_BIT - offset)) - 1) ^
> +		((1 << (CHAR_BIT - last_bit)) - 1);
> +}
> +
> +static inline void
> +write_unaligned_byte(uint8_t *ptr, unsigned int len,
> +	unsigned int offset, uint8_t val)
> +{
> +	uint8_t tmp;
> +
> +	tmp = ptr[offset / CHAR_BIT];
> +	tmp &= ~get_bits_mask(len, offset);
> +	tmp |= ((val << (CHAR_BIT - len)) >> (offset % CHAR_BIT));
> +	ptr[offset / CHAR_BIT] = tmp;
> +	if (((offset + len) / CHAR_BIT) != (offset / CHAR_BIT)) {
> +		int rest_len = (offset + len) % CHAR_BIT;
> +		tmp = ptr[(offset + len) / CHAR_BIT];
> +		tmp &= ~get_bits_mask(rest_len, 0);
> +		tmp |= val << (CHAR_BIT - rest_len);
> +		ptr[(offset + len) / CHAR_BIT] = tmp;
> +	}
> +}
> +
>  static inline void
> -xor_bit(uint8_t *ptr, uint32_t bit, uint32_t pos)
> +write_unaligned_bits(uint8_t *ptr, int len, int offset, uint32_t val)
>  {
> -	uint32_t byte_idx = pos >> 3;
> -	uint32_t bit_idx = (CHAR_BIT - 1) - (pos & (CHAR_BIT - 1));
>  	uint8_t tmp;
> +	unsigned int part_len;
> +
> +	len = RTE_MAX(len, 0);
> +	len = RTE_MIN(len, (int)(sizeof(uint32_t) * CHAR_BIT));
>  
> -	tmp = ptr[byte_idx];
> -	tmp ^= bit << bit_idx;
> -	ptr[byte_idx] = tmp;
> +	while (len > 0) {
> +		part_len = RTE_MIN(CHAR_BIT, len);
> +		tmp = (uint8_t)val & ((1 << part_len) - 1);
> +		write_unaligned_byte(ptr, part_len,
> +			offset + len - part_len, tmp);
> +		len -= CHAR_BIT;
> +		val >>= CHAR_BIT;
> +	}
>  }
>  
>  int
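
Not a problem with the patch, just a note for other readers: the helpers
above read and write a bit field that is MSB-first and may straddle byte
boundaries. Below is a minimal standalone sketch of the same idea, written
bit-at-a-time for clarity; get_bits()/set_bits() and the sample offsets are
mine, not part of the patch, and it assumes CHAR_BIT == 8.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Read <len> bits starting at bit <offset> (MSB-first) from buf. */
static uint32_t
get_bits(const uint8_t *buf, unsigned int offset, unsigned int len)
{
	uint32_t v = 0;
	unsigned int i;

	for (i = 0; i < len; i++) {
		unsigned int bit = offset + i;

		v = (v << 1) | ((buf[bit / 8] >> (7 - bit % 8)) & 0x1);
	}

	return v;
}

/* Write the low <len> bits of val at bit <offset> (MSB-first) into buf. */
static void
set_bits(uint8_t *buf, unsigned int offset, unsigned int len, uint32_t val)
{
	unsigned int i;

	for (i = 0; i < len; i++) {
		unsigned int bit = offset + i;
		uint8_t mask = 1u << (7 - bit % 8);

		if ((val >> (len - 1 - i)) & 0x1)
			buf[bit / 8] |= mask;
		else
			buf[bit / 8] &= ~mask;
	}
}

int
main(void)
{
	/* A 7-bit field starting at bit 13 straddles bytes 1 and 2. */
	uint8_t tuple[4] = {0x12, 0x34, 0x56, 0x78};
	uint32_t sub = get_bits(tuple, 13, 7);

	printf("before: 0x%" PRIx32 "\n", sub);
	/* Increment modulo 2^7, analogous to what the retry path now does. */
	set_bits(tuple, 13, 7, (sub + 1) & 0x7f);
	printf("after:  0x%" PRIx32 "\n", get_bits(tuple, 13, 7));

	return 0;
}

The real helpers of course work a byte at a time for speed; the sketch is
only meant to make the bit ordering explicit.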
> @@ -632,8 +707,10 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
>  	uint32_t tmp_tuple[tuple_len / sizeof(uint32_t)];
>  	unsigned int i, j, ret = 0;
>  	uint32_t hash, adj_bits;
> -	uint8_t bit;
>  	const uint8_t *hash_key;
> +	uint32_t tmp;
> +	int offset;
> +	int tmp_len;
>  
>  	if ((ctx == NULL) || (h == NULL) || (tuple == NULL) ||
>  			(tuple_len % sizeof(uint32_t) != 0) || (attempts <= 0))
> @@ -641,6 +718,8 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
>  
>  	hash_key = rte_thash_get_key(ctx);
>  
> +	attempts = RTE_MIN(attempts, 1U << (h->tuple_len - ctx->reta_sz_log));
> +
>  	for (i = 0; i < attempts; i++) {
>  		for (j = 0; j < (tuple_len / 4); j++)
>  			tmp_tuple[j] =
> @@ -651,14 +730,12 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
>  
>  		/*
>  		 * Hint: LSB of adj_bits corresponds to
> -		 * offset + len bit of tuple
> +		 * offset + len bit of the subtuple
>  		 */
> -		for (j = 0; j < sizeof(uint32_t) * CHAR_BIT; j++) {
> -			bit = (adj_bits >> j) & 0x1;
> -			if (bit)
> -				xor_bit(tuple, bit, h->tuple_offset +
> -					h->tuple_len - 1 - j);
> -		}
> +		offset = h->tuple_offset + h->tuple_len - ctx->reta_sz_log;
> +		tmp = read_unaligned_bits(tuple, ctx->reta_sz_log, offset);
> +		tmp ^= adj_bits;
> +		write_unaligned_bits(tuple, ctx->reta_sz_log, offset, tmp);
>  
>  		if (fn != NULL) {
>  			ret = (fn(userdata, tuple)) ? 0 : -EEXIST;
> @@ -666,13 +743,15 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
>  				return 0;
>  			else if (i < (attempts - 1)) {

Small nit. This comment should be updated as the bits aren't random
anymore, just incremented.

>  				/* Update tuple with random bits */
> -				for (j = 0; j < h->tuple_len; j++) {
> -					bit = rte_rand() & 0x1;
> -					if (bit)
> -						xor_bit(tuple, bit,
> -							h->tuple_offset +
> -							h->tuple_len - 1 - j);
> -				}
> +				tmp_len = RTE_MIN(sizeof(uint32_t) * CHAR_BIT,
> +					h->tuple_len - ctx->reta_sz_log);
> +				offset -= tmp_len;
> +				tmp = read_unaligned_bits(tuple, tmp_len,
> +					offset);
> +				tmp++;
> +				tmp &= (1 << tmp_len) - 1;
> +				write_unaligned_bits(tuple, tmp_len, offset,
> +					tmp);
>  			}
>  		} else
>  			return 0;
> -- 
> 2.7.4
> 

With this change the issue is no longer visible on either x86 or RISC-V.

Tested-by: Stanislaw Kardach
Reviewed-by: Stanislaw Kardach

-- 
Best Regards,
Stanislaw Kardach
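
P.S. A worked example of the new attempts clamp, mostly to check my own
understanding (the concrete numbers below are made up, not from the patch):
with a 16-bit subtuple helper (h->tuple_len = 16) and a 128-entry RETA
(ctx->reta_sz_log = 7), the increment path only ever touches the upper
16 - 7 = 9 bits of the subtuple, so there are at most 2^9 = 512 distinct
candidates and clamping attempts to 1 << (h->tuple_len - ctx->reta_sz_log)
looks correct.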