From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Medvedkin, Vladimir", dev@dpdk.org
Cc: "Wang, Yipeng1", "Gobriel, Sameh", "Richardson, Bruce"
Subject: Re: [dpdk-dev] [PATCH v3 1/4] hash: add k32v64 hash library
Date: Thu, 23 Apr 2020 13:31:20 +0000
In-Reply-To: <0e767e9171c4e90d57ec06b50d6bf3b7d79828b1.1586974411.git.vladimir.medvedkin@intel.com>

Hi Vladimir,

Apologies for the late review. My comments below.

> K32V64 hash is a hash table that supports 32 bit keys and 64 bit values.
> This table is hash function agnostic so user must provide
> precalculated hash signature for add/delete/lookup operations.
>
> Signed-off-by: Vladimir Medvedkin
> ---
>
> --- /dev/null
> +++ b/lib/librte_hash/rte_k32v64_hash.c
> @@ -0,0 +1,315 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include
> +
> +TAILQ_HEAD(rte_k32v64_hash_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem rte_k32v64_hash_tailq = {
> +	.name = "RTE_K32V64_HASH",
> +};
> +
> +EAL_REGISTER_TAILQ(rte_k32v64_hash_tailq);
> +
> +#define VALID_KEY_MSK ((1 << RTE_K32V64_KEYS_PER_BUCKET) - 1)
> +
> +#ifdef CC_AVX512VL_SUPPORT
> +int
> +k32v64_hash_bulk_lookup_avx512vl(struct rte_k32v64_hash_table *table,
> +	uint32_t *keys, uint32_t *hashes, uint64_t *values, unsigned int n);
> +#endif
> +
> +static int
> +k32v64_hash_bulk_lookup(struct rte_k32v64_hash_table *table, uint32_t *keys,
> +	uint32_t *hashes, uint64_t *values, unsigned int n)
> +{
> +	int ret, cnt = 0;
> +	unsigned int i;
> +
> +	if (unlikely((table == NULL) || (keys == NULL) || (hashes == NULL) ||
> +			(values == NULL)))
> +		return -EINVAL;
> +
> +	for (i = 0; i < n; i++) {
> +		ret = rte_k32v64_hash_lookup(table, keys[i], hashes[i],
> +			&values[i]);
> +		if (ret == 0)
> +			cnt++;
> +	}
> +	return cnt;
> +}
> +
> +static rte_k32v64_hash_bulk_lookup_t
> +get_lookup_bulk_fn(void)
> +{
> +#ifdef CC_AVX512VL_SUPPORT
> +	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F))
> +		return k32v64_hash_bulk_lookup_avx512vl;
> +#endif
> +	return k32v64_hash_bulk_lookup;
> +}
> +
> +int
> +rte_k32v64_hash_add(struct rte_k32v64_hash_table *table, uint32_t key,
> +	uint32_t hash, uint64_t value)
> +{
> +	uint32_t bucket;
> +	int i, idx, ret;
> +	uint8_t msk;
> +	struct rte_k32v64_ext_ent *tmp, *ent, *prev = NULL;
> +
> +	if (table == NULL)
> +		return -EINVAL;
> +

I think for add you also need to update bucket.cnt at the start/end of
the updates (as you do for del).
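Something along these lines for the in-place value update path, for example
(rough sketch only, keeping rte_atomic32_inc() as the current del code does;
the same bracketing would apply to the key_mask and extended-list updates
further down):

	/* existing key found: mark the bucket as being updated */
	rte_atomic32_inc(&table->t[bucket].cnt);
	table->t[bucket].val[i] = value;
	/* mark the update as finished */
	rte_atomic32_inc(&table->t[bucket].cnt);
	return 0;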
> +	bucket = hash & table->bucket_msk;
> +	/* Search key in table. Update value if exists */
> +	for (i = 0; i < RTE_K32V64_KEYS_PER_BUCKET; i++) {
> +		if ((key == table->t[bucket].key[i]) &&
> +				(table->t[bucket].key_mask & (1 << i))) {
> +			table->t[bucket].val[i] = value;
> +			return 0;
> +		}
> +	}
> +
> +	if (!SLIST_EMPTY(&table->t[bucket].head)) {
> +		SLIST_FOREACH(ent, &table->t[bucket].head, next) {
> +			if (ent->key == key) {
> +				ent->val = value;
> +				return 0;
> +			}
> +		}
> +	}
> +
> +	msk = ~table->t[bucket].key_mask & VALID_KEY_MSK;
> +	if (msk) {
> +		idx = __builtin_ctz(msk);
> +		table->t[bucket].key[idx] = key;
> +		table->t[bucket].val[idx] = value;
> +		rte_smp_wmb();
> +		table->t[bucket].key_mask |= 1 << idx;
> +		table->nb_ent++;
> +		return 0;
> +	}
> +
> +	ret = rte_mempool_get(table->ext_ent_pool, (void **)&ent);
> +	if (ret < 0)
> +		return ret;
> +
> +	SLIST_NEXT(ent, next) = NULL;
> +	ent->key = key;
> +	ent->val = value;
> +	rte_smp_wmb();
> +	SLIST_FOREACH(tmp, &table->t[bucket].head, next)
> +		prev = tmp;
> +
> +	if (prev == NULL)
> +		SLIST_INSERT_HEAD(&table->t[bucket].head, ent, next);
> +	else
> +		SLIST_INSERT_AFTER(prev, ent, next);
> +
> +	table->nb_ent++;
> +	table->nb_ext_ent++;
> +	return 0;
> +}
> +
> +int
> +rte_k32v64_hash_delete(struct rte_k32v64_hash_table *table, uint32_t key,
> +	uint32_t hash)
> +{
> +	uint32_t bucket;
> +	int i;
> +	struct rte_k32v64_ext_ent *ent;
> +
> +	if (table == NULL)
> +		return -EINVAL;
> +
> +	bucket = hash & table->bucket_msk;
> +
> +	for (i = 0; i < RTE_K32V64_KEYS_PER_BUCKET; i++) {
> +		if ((key == table->t[bucket].key[i]) &&
> +				(table->t[bucket].key_mask & (1 << i))) {
> +			ent = SLIST_FIRST(&table->t[bucket].head);
> +			if (ent) {
> +				rte_atomic32_inc(&table->t[bucket].cnt);

I know that right now rte_atomic32 uses _sync gcc builtins underneath,
so it should be safe.
But I think the proper way would be:

	table->t[bucket].cnt++;
	rte_smp_wmb();

or, as an alternative, probably use C11 atomics with ACQUIRE/RELEASE ordering.

> +				table->t[bucket].key[i] = ent->key;
> +				table->t[bucket].val[i] = ent->val;
> +				SLIST_REMOVE_HEAD(&table->t[bucket].head, next);
> +				rte_atomic32_inc(&table->t[bucket].cnt);
> +				table->nb_ext_ent--;
> +			} else
> +				table->t[bucket].key_mask &= ~(1 << i);

I think you need to protect that update with bucket.cnt as well.
From my perspective, as a rule of thumb, any update to the bucket/list
should be within that transaction-start/transaction-end.
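I.e. something like below (just a sketch, assuming cnt is turned into a
plain uint32_t so that the cnt++/rte_smp_wmb() pair above can be used):

	/* start of the bucket update "transaction" */
	table->t[bucket].cnt++;
	rte_smp_wmb();
	if (ent) {
		table->t[bucket].key[i] = ent->key;
		table->t[bucket].val[i] = ent->val;
		SLIST_REMOVE_HEAD(&table->t[bucket].head, next);
		table->nb_ext_ent--;
	} else
		table->t[bucket].key_mask &= ~(1 << i);
	rte_smp_wmb();
	/* end of the bucket update "transaction" */
	table->t[bucket].cnt++;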
> +			if (ent)
> +				rte_mempool_put(table->ext_ent_pool, ent);
> +			table->nb_ent--;
> +			return 0;
> +		}
> +	}
> +
> +	SLIST_FOREACH(ent, &table->t[bucket].head, next)
> +		if (ent->key == key)
> +			break;
> +
> +	if (ent == NULL)
> +		return -ENOENT;
> +
> +	rte_atomic32_inc(&table->t[bucket].cnt);
> +	SLIST_REMOVE(&table->t[bucket].head, ent, rte_k32v64_ext_ent, next);
> +	rte_atomic32_inc(&table->t[bucket].cnt);
> +	rte_mempool_put(table->ext_ent_pool, ent);
> +
> +	table->nb_ext_ent--;
> +	table->nb_ent--;
> +
> +	return 0;
> +}
> +
> +struct rte_k32v64_hash_table *
> +rte_k32v64_hash_find_existing(const char *name)
> +{
> +	struct rte_k32v64_hash_table *h = NULL;
> +	struct rte_tailq_entry *te;
> +	struct rte_k32v64_hash_list *k32v64_hash_list;
> +
> +	k32v64_hash_list = RTE_TAILQ_CAST(rte_k32v64_hash_tailq.head,
> +			rte_k32v64_hash_list);
> +
> +	rte_mcfg_tailq_read_lock();
> +	TAILQ_FOREACH(te, k32v64_hash_list, next) {
> +		h = (struct rte_k32v64_hash_table *) te->data;
> +		if (strncmp(name, h->name, RTE_K32V64_HASH_NAMESIZE) == 0)
> +			break;
> +	}
> +	rte_mcfg_tailq_read_unlock();
> +	if (te == NULL) {
> +		rte_errno = ENOENT;
> +		return NULL;
> +	}
> +	return h;
> +}
> +
> +struct rte_k32v64_hash_table *
> +rte_k32v64_hash_create(const struct rte_k32v64_hash_params *params)
> +{
> +	char hash_name[RTE_K32V64_HASH_NAMESIZE];
> +	struct rte_k32v64_hash_table *ht = NULL;
> +	struct rte_tailq_entry *te;
> +	struct rte_k32v64_hash_list *k32v64_hash_list;
> +	uint32_t mem_size, nb_buckets, max_ent;
> +	int ret;
> +	struct rte_mempool *mp;
> +
> +	if ((params == NULL) || (params->name == NULL) ||
> +			(params->entries == 0)) {
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	k32v64_hash_list = RTE_TAILQ_CAST(rte_k32v64_hash_tailq.head,
> +			rte_k32v64_hash_list);
> +
> +	ret = snprintf(hash_name, sizeof(hash_name), "K32V64_%s", params->name);
> +	if (ret < 0 || ret >= RTE_K32V64_HASH_NAMESIZE) {
> +		rte_errno = ENAMETOOLONG;
> +		return NULL;
> +	}
> +
> +	max_ent = rte_align32pow2(params->entries);
> +	nb_buckets = max_ent / RTE_K32V64_KEYS_PER_BUCKET;
> +	mem_size = sizeof(struct rte_k32v64_hash_table) +
> +		sizeof(struct rte_k32v64_hash_bucket) * nb_buckets;
> +
> +	mp = rte_mempool_create(hash_name, max_ent,
> +		sizeof(struct rte_k32v64_ext_ent), 0, 0, NULL, NULL, NULL, NULL,
> +		params->socket_id, 0);
> +
> +	if (mp == NULL)
> +		return NULL;
> +
> +	rte_mcfg_tailq_write_lock();
> +	TAILQ_FOREACH(te, k32v64_hash_list, next) {
> +		ht = (struct rte_k32v64_hash_table *) te->data;
> +		if (strncmp(params->name, ht->name,
> +				RTE_K32V64_HASH_NAMESIZE) == 0)
> +			break;
> +	}
> +	ht = NULL;
> +	if (te != NULL) {
> +		rte_errno = EEXIST;
> +		rte_mempool_free(mp);
> +		goto exit;
> +	}
> +
> +	te = rte_zmalloc("K32V64_HASH_TAILQ_ENTRY", sizeof(*te), 0);
> +	if (te == NULL) {
> +		RTE_LOG(ERR, HASH, "Failed to allocate tailq entry\n");
> +		rte_mempool_free(mp);
> +		goto exit;
> +	}
> +
> +	ht = rte_zmalloc_socket(hash_name, mem_size,
> +		RTE_CACHE_LINE_SIZE, params->socket_id);
> +	if (ht == NULL) {
> +		RTE_LOG(ERR, HASH, "Failed to allocate fbk hash table\n");
> +		rte_free(te);
> +		rte_mempool_free(mp);
> +		goto exit;
> +	}
> +
> +	memcpy(ht->name, hash_name, sizeof(ht->name));
> +	ht->max_ent = max_ent;
> +	ht->bucket_msk = nb_buckets - 1;
> +	ht->ext_ent_pool = mp;
> +	ht->lookup = get_lookup_bulk_fn();
> +
> +	te->data = (void *)ht;
> +	TAILQ_INSERT_TAIL(k32v64_hash_list, te, next);
> +
> +exit:
> +	rte_mcfg_tailq_write_unlock();
> +
> +	return ht;
> +}
> +
> +void
> +rte_k32v64_hash_free(struct rte_k32v64_hash_table *ht)
> +{
> +	struct rte_tailq_entry *te;
> +	struct rte_k32v64_hash_list *k32v64_hash_list;
> +
> +	if (ht == NULL)
> +		return;
> +
> +	k32v64_hash_list = RTE_TAILQ_CAST(rte_k32v64_hash_tailq.head,
> +			rte_k32v64_hash_list);
> +
> +	rte_mcfg_tailq_write_lock();
> +
> +	/* find out tailq entry */
> +	TAILQ_FOREACH(te, k32v64_hash_list, next) {
> +		if (te->data == (void *) ht)
> +			break;
> +	}
> +
> +
> +	if (te == NULL) {
> +		rte_mcfg_tailq_write_unlock();
> +		return;
> +	}
> +
> +	TAILQ_REMOVE(k32v64_hash_list, te, next);
> +
> +	rte_mcfg_tailq_write_unlock();
> +
> +	rte_mempool_free(ht->ext_ent_pool);
> +	rte_free(ht);
> +	rte_free(te);
> +}
> diff --git a/lib/librte_hash/rte_k32v64_hash.h b/lib/librte_hash/rte_k32v64_hash.h
> new file mode 100644
> index 0000000..b2c52e9
> --- /dev/null
> +++ b/lib/librte_hash/rte_k32v64_hash.h
> @@ -0,0 +1,211 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +
> +#ifndef _RTE_K32V64_HASH_H_
> +#define _RTE_K32V64_HASH_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include
> +#include
> +#include
> +
> +#define RTE_K32V64_HASH_NAMESIZE	32
> +#define RTE_K32V64_KEYS_PER_BUCKET	4
> +#define RTE_K32V64_WRITE_IN_PROGRESS	1
> +
> +struct rte_k32v64_hash_params {
> +	const char *name;
> +	uint32_t entries;
> +	int socket_id;
> +};
> +
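BTW, just to check my understanding of the intended usage of this API,
something like below (hypothetical example; rte_hash_crc_4byte() from
rte_hash_crc.h is just one possible way to produce the 32-bit hash the
caller has to supply):

	struct rte_k32v64_hash_params params = {
		.name = "example_k32v64",
		.entries = 1 << 16,
		.socket_id = rte_socket_id(),
	};
	struct rte_k32v64_hash_table *tbl;
	uint32_t key = 42;
	uint32_t hash = rte_hash_crc_4byte(key, 0);	/* caller-provided hash */
	uint64_t val;

	tbl = rte_k32v64_hash_create(&params);
	if (tbl != NULL &&
			rte_k32v64_hash_add(tbl, key, hash, 1000) == 0 &&
			rte_k32v64_hash_lookup(tbl, key, hash, &val) == 0) {
		/* val == 1000 at this point */
	}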
> +struct rte_k32v64_ext_ent {
> +	SLIST_ENTRY(rte_k32v64_ext_ent) next;
> +	uint32_t key;
> +	uint64_t val;
> +};
> +
> +struct rte_k32v64_hash_bucket {
> +	uint32_t key[RTE_K32V64_KEYS_PER_BUCKET];
> +	uint64_t val[RTE_K32V64_KEYS_PER_BUCKET];
> +	uint8_t key_mask;
> +	rte_atomic32_t cnt;
> +	SLIST_HEAD(rte_k32v64_list_head, rte_k32v64_ext_ent) head;
> +} __rte_cache_aligned;
> +
> +struct rte_k32v64_hash_table;
> +
> +typedef int (*rte_k32v64_hash_bulk_lookup_t)
> +(struct rte_k32v64_hash_table *table, uint32_t *keys, uint32_t *hashes,
> +	uint64_t *values, unsigned int n);
> +
> +struct rte_k32v64_hash_table {
> +	char name[RTE_K32V64_HASH_NAMESIZE];	/**< Name of the hash. */
> +	uint32_t nb_ent;	/**< Number of entities in the table */
> +	uint32_t nb_ext_ent;	/**< Number of extended entities */
> +	uint32_t max_ent;	/**< Maximum number of entities */
> +	uint32_t bucket_msk;
> +	struct rte_mempool *ext_ent_pool;
> +	rte_k32v64_hash_bulk_lookup_t lookup;
> +	__extension__ struct rte_k32v64_hash_bucket t[0];
> +};
> +
> +typedef int (*rte_k32v64_cmp_fn_t)
> +(struct rte_k32v64_hash_bucket *bucket, uint32_t key, uint64_t *val);
> +
> +static inline int
> +__k32v64_cmp_keys(struct rte_k32v64_hash_bucket *bucket, uint32_t key,
> +	uint64_t *val)
> +{
> +	int i;
> +
> +	for (i = 0; i < RTE_K32V64_KEYS_PER_BUCKET; i++) {
> +		if ((key == bucket->key[i]) &&
> +				(bucket->key_mask & (1 << i))) {
> +			*val = bucket->val[i];
> +			return 1;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static inline int
> +__k32v64_hash_lookup(struct rte_k32v64_hash_table *table, uint32_t key,
> +	uint32_t hash, uint64_t *value, rte_k32v64_cmp_fn_t cmp_f)
> +{
> +	uint64_t val = 0;
> +	struct rte_k32v64_ext_ent *ent;
> +	int32_t cnt;
> +	int found = 0;
> +	uint32_t bucket = hash & table->bucket_msk;
> +
> +	do {
> +		do
> +			cnt = rte_atomic32_read(&table->t[bucket].cnt);
> +		while (unlikely(cnt & RTE_K32V64_WRITE_IN_PROGRESS));
> +
> +		found = cmp_f(&table->t[bucket], key, &val);
> +		if (unlikely((found == 0) &&
> +				(!SLIST_EMPTY(&table->t[bucket].head)))) {
> +			SLIST_FOREACH(ent, &table->t[bucket].head, next) {
> +				if (ent->key == key) {
> +					val = ent->val;
> +					found = 1;
> +					break;
> +				}
> +			}
> +		}
> +
> +	} while (unlikely(cnt != rte_atomic32_read(&table->t[bucket].cnt)));

AFAIK atomic32_read is just a normal read op, so it can be reordered with
other ops. So this construction doesn't protect you from races.
What you probably need here is:

	do {
		cnt1 = table->t[bucket].cnt;
		rte_smp_rmb();
		....
		rte_smp_rmb();
		cnt2 = table->t[bucket].cnt;
	} while (cnt1 != cnt2 || (cnt1 & RTE_K32V64_WRITE_IN_PROGRESS) != 0);

> +
> +	if (found == 1) {
> +		*value = val;
> +		return 0;
> +	} else
> +		return -ENOENT;
> +}
> +
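To spell the read side out completely, the loop I have in mind would look
roughly like this (untested sketch; it assumes cnt becomes a plain uint32_t
as in the snippet above, and it just reuses the body of the existing function):

	uint32_t cnt1, cnt2;

	do {
		cnt1 = table->t[bucket].cnt;
		rte_smp_rmb();

		found = cmp_f(&table->t[bucket], key, &val);
		if (unlikely((found == 0) &&
				(!SLIST_EMPTY(&table->t[bucket].head)))) {
			SLIST_FOREACH(ent, &table->t[bucket].head, next) {
				if (ent->key == key) {
					val = ent->val;
					found = 1;
					break;
				}
			}
		}

		rte_smp_rmb();
		cnt2 = table->t[bucket].cnt;
		/* retry if a write started or completed while we were reading */
	} while (cnt1 != cnt2 || (cnt1 & RTE_K32V64_WRITE_IN_PROGRESS) != 0);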