From: Dharmik Thakkar
To: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Ray Kinsella, Neil Horman
CC: dpdk-dev, nd
Date: Tue, 25 Aug 2020 19:59:10 +0000
Message-ID: <3A9AA227-C523-4B94-AEA2-B9838741DD28@arm.com>
In-Reply-To: <20200819040537.1792-1-dharmik.thakkar@arm.com>
References: <20190901065810.15137-1-dharmik.thakkar@arm.com> <20200819040537.1792-1-dharmik.thakkar@arm.com>
Subject: Re: [dpdk-dev] [RFC v2] lib/hash: integrate RCU QSBR

CI has reported some compilation issues for this patch. I will resolve these
issues once the RFC patch is approved. Thank you!

> On Aug 18, 2020, at 11:05 PM, Dharmik Thakkar wrote:
>
> Integrate RCU QSBR to make it easier for the applications to use lock
> free algorithm.
>
> Resource reclamation implementation was split from the original
> series, and has already been part of RCU library. Rework the series
> to base hash integration on RCU reclamation APIs.
>
> Refer 'Resource reclamation framework for DPDK' available at [1]
> to understand various aspects of integrating RCU library
> into other libraries.
>
> [1] https://doc.dpdk.org/guides/prog_guide/rcu_lib.html
>
> Introduce a new API rte_hash_rcu_qsbr_add for application to
> register a RCU variable that hash library will use.
>
> Suggested-by: Honnappa Nagarahalli
> Signed-off-by: Dharmik Thakkar
> Reviewed-by: Ruifeng Wang
> ---
> v2:
>  - Remove defer queue related functions and use resource reclamation
>    APIs from the RCU QSBR library instead
>
>  - Remove patch (net/ixgbe: avoid multpile definitions of 'bool')
>    from the series as it is already accepted
>
> ---
>  lib/Makefile                         |   2 +-
>  lib/librte_hash/Makefile             |   2 +-
>  lib/librte_hash/meson.build          |   1 +
>  lib/librte_hash/rte_cuckoo_hash.c    | 291 +++++++++++++++++++++------
>  lib/librte_hash/rte_cuckoo_hash.h    |   8 +
>  lib/librte_hash/rte_hash.h           |  75 ++++++-
>  lib/librte_hash/rte_hash_version.map |   2 +-
>  7 files changed, 308 insertions(+), 73 deletions(-)
>
> diff --git a/lib/Makefile b/lib/Makefile
> index 8f5b68a2d469..dffe31c829f0 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -51,7 +51,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
>  DEPDIRS-librte_vhost := librte_eal librte_mempool librte_mbuf librte_ethdev \
>                          librte_net librte_hash librte_cryptodev
>  DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
> -DEPDIRS-librte_hash := librte_eal librte_ring
> +DEPDIRS-librte_hash := librte_eal librte_ring librte_rcu
>  DIRS-$(CONFIG_RTE_LIBRTE_EFD) += librte_efd
>  DEPDIRS-librte_efd := librte_eal librte_ring librte_hash
>  DIRS-$(CONFIG_RTE_LIBRTE_RIB) += librte_rib
> diff --git a/lib/librte_hash/Makefile b/lib/librte_hash/Makefile
> index ec9f86499262..10e697f48652 100644
> --- a/lib/librte_hash/Makefile
> +++ b/lib/librte_hash/Makefile
> @@ -8,7 +8,7 @@ LIB = librte_hash.a
>
>  CFLAGS += -O3
>  CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
> -LDLIBS += -lrte_eal -lrte_ring
> +LDLIBS += -lrte_eal -lrte_ring -lrte_rcu
>
>  EXPORT_MAP := rte_hash_version.map
>
> diff --git a/lib/librte_hash/meson.build b/lib/librte_hash/meson.build
> index 6ab46ae9d768..0977a63fd279 100644
> --- a/lib/librte_hash/meson.build
> +++ b/lib/librte_hash/meson.build
> @@ -10,3 +10,4 @@ headers = files('rte_crc_arm64.h',
>
>  sources = files('rte_cuckoo_hash.c', 'rte_fbk_hash.c')
>  deps += ['ring']
> +deps += ['rcu']
> diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
> index 0a6d474713a2..01c2cbe0e38b 100644
> --- a/lib/librte_hash/rte_cuckoo_hash.c
> +++ b/lib/librte_hash/rte_cuckoo_hash.c
> @@ -52,6 +52,11 @@ static struct rte_tailq_elem rte_hash_tailq = {
>  };
>  EAL_REGISTER_TAILQ(rte_hash_tailq)
>
> +struct __rte_hash_rcu_dq_entry {
> +        uint32_t key_idx;
> +        uint32_t ext_bkt_idx;   /**< Extended bkt index */
> +};
> +
>  struct rte_hash *
>  rte_hash_find_existing(const char *name)
>  {
> @@ -210,7 +215,10 @@ rte_hash_create(const struct rte_hash_parameters *params)
>
>          if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF) {
>                  readwrite_concur_lf_support = 1;
> -                /* Enable not freeing internal memory/index on delete */
> +                /* Enable not freeing internal memory/index on delete.
> +                 * If internal RCU is enabled, freeing of internal memory/index
> +                 * is done on delete
> +                 */
>                  no_free_on_del = 1;
>          }
>
> @@ -505,6 +513,10 @@ rte_hash_free(struct rte_hash *h)
>
>          rte_mcfg_tailq_write_unlock();
>
> +        /* RCU clean up. */
> +        if (h->dq)
> +                rte_rcu_qsbr_dq_delete(h->dq);
> +
>          if (h->use_local_cache)
>                  rte_free(h->local_free_slots);
>          if (h->writer_takes_lock)
> @@ -612,6 +624,16 @@ rte_hash_reset(struct rte_hash *h)
>                  return;
>
>          __hash_rw_writer_lock(h);
> +
> +        /* RCU clean up. */
> +        if (h->hash_rcu_cfg->v) {
> +                rte_rcu_qsbr_dq_delete(h->dq);
> +                h->dq = NULL;
> +                if (rte_hash_rcu_qsbr_add(h, h->hash_rcu_cfg) < 0) {
> +                        RTE_LOG(ERR, HASH, "RCU init failed\n");
> +                        return;
> +                }
> +        }
>          memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket));
>          memset(h->key_store, 0, h->key_entry_size * (h->entries + 1));
>          *h->tbl_chng_cnt = 0;
> @@ -952,6 +974,37 @@ rte_hash_cuckoo_make_space_mw(const struct rte_hash *h,
>          return -ENOSPC;
>  }
>
> +static inline uint32_t
> +alloc_slot(const struct rte_hash *h, struct lcore_cache *cached_free_slots)
> +{
> +        unsigned int n_slots;
> +        uint32_t slot_id;
> +        if (h->use_local_cache) {
> +                /* Try to get a free slot from the local cache */
> +                if (cached_free_slots->len == 0) {
> +                        /* Need to get another burst of free slots from global ring */
> +                        n_slots = rte_ring_mc_dequeue_burst_elem(h->free_slots,
> +                                        cached_free_slots->objs,
> +                                        sizeof(uint32_t),
> +                                        LCORE_CACHE_SIZE, NULL);
> +                        if (n_slots == 0)
> +                                return EMPTY_SLOT;
> +
> +                        cached_free_slots->len += n_slots;
> +                }
> +
> +                /* Get a free slot from the local cache */
> +                cached_free_slots->len--;
> +                slot_id = cached_free_slots->objs[cached_free_slots->len];
> +        } else {
> +                if (rte_ring_sc_dequeue_elem(h->free_slots, &slot_id,
> +                                sizeof(uint32_t)) != 0)
> +                        return EMPTY_SLOT;
> +        }
> +
> +        return slot_id;
> +}
> +
>  static inline int32_t
>  __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>                                                  hash_sig_t sig, void *data)
> @@ -963,7 +1016,6 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>          uint32_t ext_bkt_id = 0;
>          uint32_t slot_id;
>          int ret;
> -        unsigned n_slots;
>          unsigned lcore_id;
>          unsigned int i;
>          struct lcore_cache *cached_free_slots = NULL;
> @@ -1001,28 +1053,19 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>          if (h->use_local_cache) {
>                  lcore_id = rte_lcore_id();
>                  cached_free_slots = &h->local_free_slots[lcore_id];
> -                /* Try to get a free slot from the local cache */
> -                if (cached_free_slots->len == 0) {
> -                        /* Need to get another burst of free slots from global ring */
> -                        n_slots = rte_ring_mc_dequeue_burst_elem(h->free_slots,
> -                                        cached_free_slots->objs,
> -                                        sizeof(uint32_t),
> -                                        LCORE_CACHE_SIZE, NULL);
> -                        if (n_slots == 0) {
> -                                return -ENOSPC;
> +        }
> +        slot_id = alloc_slot(h, cached_free_slots);
> +        if (slot_id == EMPTY_SLOT) {
> +                if (h->hash_rcu_cfg->v) {
> +                        if (rte_rcu_qsbr_dq_reclaim(h->dq,
> +                                        h->hash_rcu_cfg->reclaim_max,
> +                                        NULL, NULL, NULL)
> +                                        == 0) {
> +                                slot_id = alloc_slot(h, cached_free_slots);
>                          }
> -
> -                        cached_free_slots->len += n_slots;
>                  }
> -
> -                /* Get a free slot from the local cache */
> -                cached_free_slots->len--;
> -                slot_id = cached_free_slots->objs[cached_free_slots->len];
> -        } else {
> -                if (rte_ring_sc_dequeue_elem(h->free_slots, &slot_id,
> -                                sizeof(uint32_t)) != 0) {
> +                if (slot_id == EMPTY_SLOT)
>                          return -ENOSPC;
> -                }
>          }
>
>          new_k = RTE_PTR_ADD(keys, slot_id * h->key_entry_size);
> @@ -1118,8 +1161,20 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>                  if (rte_ring_sc_dequeue_elem(h->free_ext_bkts, &ext_bkt_id,
>                                                  sizeof(uint32_t)) != 0 ||
>                                                  ext_bkt_id == 0) {
> -                        ret = -ENOSPC;
> -                        goto failure;
> +                        if (h->hash_rcu_cfg->v) {
> +                                if (rte_rcu_qsbr_dq_reclaim(h->dq,
> +                                                h->hash_rcu_cfg->reclaim_max,
> +                                                NULL, NULL, NULL)
> +                                                == 0) {
> +                                        rte_ring_sc_dequeue_elem(h->free_ext_bkts,
> +                                                        &ext_bkt_id,
> +                                                        sizeof(uint32_t));
> +                                }
> +                        }
> +                        if (ext_bkt_id == 0) {
> +                                ret = -ENOSPC;
> +                                goto failure;
> +                        }
>                  }
>
>                  /* Use the first location of the new bucket */
> @@ -1395,12 +1450,12 @@ rte_hash_lookup_data(const struct rte_hash *h, const void *key, void **data)
>          return __rte_hash_lookup_with_hash(h, key, rte_hash_hash(h, key), data);
>  }
>
> -static inline void
> -remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
> +static int
> +free_slot(const struct rte_hash *h, uint32_t slot_id)
>  {
>          unsigned lcore_id, n_slots;
>          struct lcore_cache *cached_free_slots;
> -
> +        /* Return key indexes to free slot ring */
>          if (h->use_local_cache) {
>                  lcore_id = rte_lcore_id();
>                  cached_free_slots = &h->local_free_slots[lcore_id];
> @@ -1411,18 +1466,122 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
>                                  cached_free_slots->objs,
>                                  sizeof(uint32_t),
>                                  LCORE_CACHE_SIZE, NULL);
> -                        ERR_IF_TRUE((n_slots == 0),
> -                                "%s: could not enqueue free slots in global ring\n",
> -                                __func__);
> +                        RETURN_IF_TRUE((n_slots == 0), -EFAULT);
>                          cached_free_slots->len -= n_slots;
>                  }
> -                /* Put index of new free slot in cache. */
> -                cached_free_slots->objs[cached_free_slots->len] =
> -                                bkt->key_idx[i];
> -                cached_free_slots->len++;
> +        }
> +
> +        enqueue_slot_back(h, cached_free_slots, slot_id);
> +        return 0;
> +}
> +
> +static void
> +__hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n)
> +{
> +        void *key_data = NULL;
> +        int ret;
> +        struct rte_hash_key *keys, *k;
> +        struct rte_hash *h = (struct rte_hash *)p;
> +        struct __rte_hash_rcu_dq_entry rcu_dq_entry =
> +                        *((struct __rte_hash_rcu_dq_entry *)e);
> +
> +        RTE_SET_USED(n);
> +        keys = h->key_store;
> +
> +        k = (struct rte_hash_key *) ((char *)keys +
> +                        rcu_dq_entry.key_idx * h->key_entry_size);
> +        key_data = k->pdata;
> +        if (h->hash_rcu_cfg->free_key_data_func)
> +                h->hash_rcu_cfg->free_key_data_func(h->hash_rcu_cfg->key_data_ptr,
> +                                                    key_data);
> +
> +        if (h->ext_table_support && rcu_dq_entry.ext_bkt_idx != EMPTY_SLOT)
> +                /* Recycle empty ext bkt to free list. */
> +                rte_ring_sp_enqueue_elem(h->free_ext_bkts,
> +                        &rcu_dq_entry.ext_bkt_idx, sizeof(uint32_t));
> +
> +        /* Return key indexes to free slot ring */
> +        ret = free_slot(h, rcu_dq_entry.key_idx);
> +        if (ret < 0) {
> +                RTE_LOG(ERR, HASH,
> +                        "%s: could not enqueue free slots in global ring\n",
> +                        __func__);
> +        }
> +}
> +
> +int
> +rte_hash_rcu_qsbr_add(struct rte_hash *h,
> +                                struct rte_hash_rcu_config *cfg)
> +{
> +        struct rte_rcu_qsbr_dq_parameters params = {0};
> +        char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
> +        struct rte_hash_rcu_config *hash_rcu_cfg = NULL;
> +
> +        const uint32_t total_entries = h->use_local_cache ?
> +                h->entries + (RTE_MAX_LCORE - 1) * (LCORE_CACHE_SIZE - 1) + 1
> +                : h->entries + 1;
> +
> +        if ((h == NULL) || cfg == NULL) {
> +                rte_errno = EINVAL;
> +                return 1;
> +        }
> +
> +        hash_rcu_cfg = rte_zmalloc(NULL, sizeof(struct rte_hash_rcu_config), 0);
> +        if (hash_rcu_cfg == NULL) {
> +                RTE_LOG(ERR, HASH, "memory allocation failed\n");
> +                return 1;
> +        }
> +
> +        if (cfg->mode == RTE_HASH_QSBR_MODE_SYNC) {
> +                /* No other things to do. */
> +        } else if (cfg->mode == RTE_HASH_QSBR_MODE_DQ) {
> +                /* Init QSBR defer queue. */
> +                snprintf(rcu_dq_name, sizeof(rcu_dq_name),
> +                                "HASH_RCU_%s", h->name);
> +                params.name = rcu_dq_name;
> +                params.size = cfg->dq_size;
> +                if (params.size == 0)
> +                        params.size = total_entries;
> +                params.trigger_reclaim_limit = cfg->reclaim_thd;
> +                if (params.max_reclaim_size == 0)
> +                        params.max_reclaim_size = RTE_HASH_RCU_DQ_RECLAIM_MAX;
> +                params.esize = sizeof(struct __rte_hash_rcu_dq_entry);
> +                params.free_fn = __hash_rcu_qsbr_free_resource;
> +                params.p = h;
> +                params.v = cfg->v;
> +                h->dq = rte_rcu_qsbr_dq_create(&params);
> +                if (h->dq == NULL) {
> +                        rte_free(hash_rcu_cfg);
> +                        RTE_LOG(ERR, HASH, "HASH defer queue creation failed\n");
> +                        return 1;
> +                }
>          } else {
> -                rte_ring_sp_enqueue_elem(h->free_slots,
> -                                &bkt->key_idx[i], sizeof(uint32_t));
> +                rte_free(hash_rcu_cfg);
> +                rte_errno = EINVAL;
> +                return 1;
> +        }
> +
> +        hash_rcu_cfg->v = cfg->v;
> +        hash_rcu_cfg->mode = cfg->mode;
> +        hash_rcu_cfg->dq_size = cfg->dq_size;
> +        hash_rcu_cfg->reclaim_thd = cfg->reclaim_thd;
> +        hash_rcu_cfg->reclaim_max = cfg->reclaim_max;
> +        hash_rcu_cfg->free_key_data_func = cfg->free_key_data_func;
> +        hash_rcu_cfg->key_data_ptr = cfg->key_data_ptr;
> +
> +        h->hash_rcu_cfg = hash_rcu_cfg;
> +
> +        return 0;
> +}
> +
> +static inline void
> +remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
> +{
> +        int ret = free_slot(h, bkt->key_idx[i]);
> +        if (ret < 0) {
> +                RTE_LOG(ERR, HASH,
> +                        "%s: could not enqueue free slots in global ring\n",
> +                        __func__);
>          }
>  }
>
> @@ -1521,6 +1680,8 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>          int pos;
>          int32_t ret, i;
>          uint16_t short_sig;
> +        uint32_t index = EMPTY_SLOT;
> +        struct __rte_hash_rcu_dq_entry rcu_dq_entry;
>
>          short_sig = get_short_sig(sig);
>          prim_bucket_idx = get_prim_bucket_index(h, sig);
> @@ -1555,10 +1716,9 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>
>          /* Search last bucket to see if empty to be recycled */
>  return_bkt:
> -        if (!last_bkt) {
> -                __hash_rw_writer_unlock(h);
> -                return ret;
> -        }
> +        if (!last_bkt)
> +                goto return_key;
> +
>          while (last_bkt->next) {
>                  prev_bkt = last_bkt;
>                  last_bkt = last_bkt->next;
> @@ -1571,11 +1731,11 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>          /* found empty bucket and recycle */
>          if (i == RTE_HASH_BUCKET_ENTRIES) {
>                  prev_bkt->next = NULL;
> -                uint32_t index = last_bkt - h->buckets_ext + 1;
> +                index = last_bkt - h->buckets_ext + 1;
>                  /* Recycle the empty bkt if
>                   * no_free_on_del is disabled.
>                   */
> -                if (h->no_free_on_del)
> +                if (h->no_free_on_del) {
>                          /* Store index of an empty ext bkt to be recycled
>                           * on calling rte_hash_del_xxx APIs.
>                           * When lock free read-write concurrency is enabled,
> @@ -1583,12 +1743,32 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>                           * immediately (as readers might be using it still).
>                           * Hence freeing of the ext bkt is piggy-backed to
>                           * freeing of the key index.
> +                         * If using external RCU, store this index in an array.
>                           */
> -                        h->ext_bkt_to_free[ret] = index;
> -                else
> +                        if (h->hash_rcu_cfg->v == NULL)
> +                                h->ext_bkt_to_free[ret] = index;
> +                } else
>                          rte_ring_sp_enqueue_elem(h->free_ext_bkts, &index,
>                                                          sizeof(uint32_t));
>          }
> +
> +return_key:
> +        /* Using internal RCU QSBR */
> +        if (h->hash_rcu_cfg->v) {
> +                /* Key index where key is stored, adding the first dummy index */
> +                rcu_dq_entry.key_idx = ret + 1;
> +                rcu_dq_entry.ext_bkt_idx = index;
> +                if (h->hash_rcu_cfg->mode == RTE_HASH_QSBR_MODE_SYNC) {
> +                        /* Wait for quiescent state change. */
> +                        rte_rcu_qsbr_synchronize(h->hash_rcu_cfg->v,
> +                                        RTE_QSBR_THRID_INVALID);
> +                        __hash_rcu_qsbr_free_resource((void *)((uintptr_t)h),
> +                                        &rcu_dq_entry, 1);
> +                } else if (h->hash_rcu_cfg->mode == RTE_HASH_QSBR_MODE_DQ)
> +                        /* Push into QSBR FIFO. */
> +                        if (rte_rcu_qsbr_dq_enqueue(h->dq, &rcu_dq_entry) != 0)
> +                                RTE_LOG(ERR, HASH, "Failed to push QSBR FIFO\n");
> +        }
>          __hash_rw_writer_unlock(h);
>          return ret;
>  }
> @@ -1637,8 +1817,6 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
>
>          RETURN_IF_TRUE(((h == NULL) || (key_idx == EMPTY_SLOT)), -EINVAL);
>
> -        unsigned int lcore_id, n_slots;
> -        struct lcore_cache *cached_free_slots;
>          const uint32_t total_entries = h->use_local_cache ?
>                  h->entries + (RTE_MAX_LCORE - 1) * (LCORE_CACHE_SIZE - 1) + 1
>                  : h->entries + 1;
> @@ -1656,28 +1834,9 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
>                  }
>          }
>
> -        if (h->use_local_cache) {
> -                lcore_id = rte_lcore_id();
> -                cached_free_slots = &h->local_free_slots[lcore_id];
> -                /* Cache full, need to free it. */
> -                if (cached_free_slots->len == LCORE_CACHE_SIZE) {
> -                        /* Need to enqueue the free slots in global ring. */
> -                        n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots,
> -                                        cached_free_slots->objs,
> -                                        sizeof(uint32_t),
> -                                        LCORE_CACHE_SIZE, NULL);
> -                        RETURN_IF_TRUE((n_slots == 0), -EFAULT);
> -                        cached_free_slots->len -= n_slots;
> -                }
> -                /* Put index of new free slot in cache. */
> -                cached_free_slots->objs[cached_free_slots->len] = key_idx;
> -                cached_free_slots->len++;
> -        } else {
> -                rte_ring_sp_enqueue_elem(h->free_slots, &key_idx,
> -                                sizeof(uint32_t));
> -        }
> +        /* Enqueue slot to cache/ring of free slots. */
> +        return free_slot(h, key_idx);
>
> -        return 0;
>  }
>
>  static inline void
> diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h
> index 345de6bf9cfd..85be49d3bbe7 100644
> --- a/lib/librte_hash/rte_cuckoo_hash.h
> +++ b/lib/librte_hash/rte_cuckoo_hash.h
> @@ -168,6 +168,11 @@ struct rte_hash {
>          struct lcore_cache *local_free_slots;
>          /**< Local cache per lcore, storing some indexes of the free slots */
>
> +        /* RCU config */
> +        struct rte_hash_rcu_config *hash_rcu_cfg;
> +        /**< HASH RCU QSBR configuration structure */
> +        struct rte_rcu_qsbr_dq *dq;     /**< RCU QSBR defer queue. */
> +
>          /* Fields used in lookup */
>
>          uint32_t key_len __rte_cache_aligned;
> @@ -230,4 +235,7 @@ struct queue_node {
>          int prev_slot;          /* Parent(slot) in search path */
>  };
>
> +/** @internal Default RCU defer queue entries to reclaim in one go. */
> +#define RTE_HASH_RCU_DQ_RECLAIM_MAX     16
> +
>  #endif
> diff --git a/lib/librte_hash/rte_hash.h b/lib/librte_hash/rte_hash.h
> index bff40251bc98..5431bcf4aeb1 100644
> --- a/lib/librte_hash/rte_hash.h
> +++ b/lib/librte_hash/rte_hash.h
> @@ -15,6 +15,7 @@
>  #include
>
>  #include
> +#include <rte_rcu_qsbr.h>
>
>  #ifdef __cplusplus
>  extern "C" {
> @@ -45,7 +46,8 @@ extern "C" {
>  /** Flag to disable freeing of key index on hash delete.
>   * Refer to rte_hash_del_xxx APIs for more details.
>   * This is enabled by default when RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF
> - * is enabled.
> + * is enabled. However, if internal RCU is enabled, freeing of internal
> + * memory/index is done on delete
>   */
>  #define RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL 0x10
>
> @@ -67,6 +69,13 @@ typedef uint32_t (*rte_hash_function)(const void *key, uint32_t key_len,
>  /** Type of function used to compare the hash key. */
>  typedef int (*rte_hash_cmp_eq_t)(const void *key1, const void *key2, size_t key_len);
>
> +/**
> + * Type of function used to free data stored in the key.
> + * Required when using internal RCU to allow application to free key-data once
> + * the key is returned to the the ring of free key-slots.
> + */
> +typedef void (*rte_hash_free_key_data)(void *p, void *key_data);
> +
>  /**
>   * Parameters used when creating the hash table.
>   */
> @@ -81,6 +90,39 @@ struct rte_hash_parameters {
>          uint8_t extra_flag;     /**< Indicate if additional parameters are present. */
>  };
>
> +/** RCU reclamation modes */
> +enum rte_hash_qsbr_mode {
> +        /** Create defer queue for reclaim. */
> +        RTE_HASH_QSBR_MODE_DQ = 0,
> +        /** Use blocking mode reclaim. No defer queue created. */
> +        RTE_HASH_QSBR_MODE_SYNC
> +};
> +
> +/** HASH RCU QSBR configuration structure. */
> +struct rte_hash_rcu_config {
> +        struct rte_rcu_qsbr *v;         /**< RCU QSBR variable. */
> +        enum rte_hash_qsbr_mode mode;
> +        /**< Mode of RCU QSBR. RTE_HASH_QSBR_MODE_xxx
> +         * '0' for default: create defer queue for reclaim.
> +         */
> +        uint32_t dq_size;
> +        /**< RCU defer queue size.
> +         * default: total hash table entries.
> +         */
> +        uint32_t reclaim_thd;   /**< Threshold to trigger auto reclaim. */
> +        uint32_t reclaim_max;
> +        /**< Max entries to reclaim in one go.
> +         * default: RTE_HASH_RCU_DQ_RECLAIM_MAX.
> +         */
> +        void *key_data_ptr;
> +        /**< Pointer passed to the free function. Typically, this is the
> +         * pointer to the data structure to which the resource to free
> +         * (key-data) belongs. This can be NULL.
> +         */
> +        rte_hash_free_key_data free_key_data_func;
> +        /**< Function to call to free the resource (key-data). */
> +};
> +
>  /** @internal A hash table structure. */
>  struct rte_hash;
>
> @@ -287,7 +329,8 @@ rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, hash_sig_t
>   * Thread safety can be enabled by setting flag during
>   * table creation.
>   * If RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL or
> - * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled,
> + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled and
> + * internal RCU is NOT enabled,
>   * the key index returned by rte_hash_add_key_xxx APIs will not be
>   * freed by this API. rte_hash_free_key_with_position API must be called
>   * additionally to free the index associated with the key.
> @@ -316,7 +359,8 @@ rte_hash_del_key(const struct rte_hash *h, const void *key);
>   * Thread safety can be enabled by setting flag during
>   * table creation.
>   * If RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL or
> - * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled,
> + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled and
> + * internal RCU is NOT enabled,
>   * the key index returned by rte_hash_add_key_xxx APIs will not be
>   * freed by this API. rte_hash_free_key_with_position API must be called
>   * additionally to free the index associated with the key.
> @@ -370,7 +414,8 @@ rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
>   * only be called from one thread by default. Thread safety
>   * can be enabled by setting flag during table creation.
>   * If RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL or
> - * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled,
> + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled and
> + * internal RCU is NOT enabled,
>   * the key index returned by rte_hash_del_key_xxx APIs must be freed
>   * using this API. This API should be called after all the readers
>   * have stopped referencing the entry corresponding to this key.
> @@ -625,6 +670,28 @@ rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
>   */
>  int32_t
>  rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32_t *next);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Associate RCU QSBR variable with an Hash object.
> + *
> + * @param h
> + *   the hash object to add RCU QSBR
> + * @param cfg
> + *   RCU QSBR configuration
> + * @return
> + *   On success - 0
> + *   On error - 1 with error code set in rte_errno.
> + *   Possible rte_errno codes are:
> + *   - EINVAL - invalid pointer
> + *   - EEXIST - already added QSBR
> + *   - ENOMEM - memory allocation failure
> + */
> +__rte_experimental
> +int rte_hash_rcu_qsbr_add(struct rte_hash *h,
> +                                struct rte_hash_rcu_config *cfg);
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_hash/rte_hash_version.map b/lib/librte_hash/rte_hash_version.map
> index c0db81014ff9..c6d73080f478 100644
> --- a/lib/librte_hash/rte_hash_version.map
> +++ b/lib/librte_hash/rte_hash_version.map
> @@ -36,5 +36,5 @@ EXPERIMENTAL {
>          rte_hash_lookup_with_hash_bulk;
>          rte_hash_lookup_with_hash_bulk_data;
>          rte_hash_max_key_id;
> -
> +        rte_hash_rcu_qsbr_add;
>  };
> --
> 2.17.1
>
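
For reviewers, a minimal sketch of how an application could use the proposed
rte_hash_rcu_qsbr_add() API is shown below. This is illustrative only and not
part of the patch: the helper names (create_hash_with_rcu, free_key_data) and
the table parameters are made up for the example, reader-side QSBR thread
registration and quiescent-state reporting are only mentioned in comments, and
dq_size/reclaim_thd/reclaim_max are left at their defaults.

    #include <rte_common.h>
    #include <rte_lcore.h>
    #include <rte_malloc.h>
    #include <rte_jhash.h>
    #include <rte_rcu_qsbr.h>
    #include <rte_hash.h>

    /* Application callback: free per-key data once no reader can still hold it. */
    static void
    free_key_data(void *user_ptr, void *key_data)
    {
    	RTE_SET_USED(user_ptr);
    	rte_free(key_data);
    }

    static struct rte_hash *
    create_hash_with_rcu(uint32_t max_reader_threads)
    {
    	struct rte_hash_parameters hp = {
    		.name = "rcu_qsbr_hash",
    		.entries = 1024,
    		.key_len = sizeof(uint32_t),
    		.hash_func = rte_jhash,
    		.socket_id = (int)rte_socket_id(),
    		.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
    	};
    	struct rte_rcu_qsbr *qsv = NULL;
    	struct rte_hash *h;
    	size_t sz;

    	h = rte_hash_create(&hp);
    	if (h == NULL)
    		return NULL;

    	/* The QSBR variable is owned by the application and sized for its readers. */
    	sz = rte_rcu_qsbr_get_memsize(max_reader_threads);
    	qsv = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
    	if (qsv == NULL || rte_rcu_qsbr_init(qsv, max_reader_threads) != 0)
    		goto error;

    	/* Register the QSBR variable with the hash table; deleted entries are
    	 * then reclaimed through the library's defer queue.
    	 */
    	struct rte_hash_rcu_config rcu_cfg = {
    		.v = qsv,
    		.mode = RTE_HASH_QSBR_MODE_DQ,
    		.free_key_data_func = free_key_data,
    	};
    	if (rte_hash_rcu_qsbr_add(h, &rcu_cfg) != 0)
    		goto error;

    	/* Reader threads still use the existing QSBR API directly:
    	 * rte_rcu_qsbr_thread_register()/rte_rcu_qsbr_thread_online() once,
    	 * and rte_rcu_qsbr_quiescent() periodically. Writers simply call
    	 * rte_hash_del_key(); no explicit rte_hash_free_key_with_position()
    	 * is needed when internal RCU is enabled.
    	 */
    	return h;

    error:
    	rte_free(qsv);
    	rte_hash_free(h);
    	return NULL;
    }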