From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dharmik Thakkar
To: "Wang, Yipeng1"
Cc: "Gobriel, Sameh", "Richardson, Bruce", Ray Kinsella, Neil Horman,
 "dev@dpdk.org", nd
Date: Wed, 21 Oct 2020 04:52:58 +0000
Message-ID: <72DB22D2-34AF-41BD-ACAC-7C2DF3121BCC@arm.com>
References: <20201019163519.28180-1-dharmik.thakkar@arm.com>
 <20201020161301.7458-1-dharmik.thakkar@arm.com>
 <20201020161301.7458-3-dharmik.thakkar@arm.com>
List-Id: DPDK patches and discussions
Subject: Re: [dpdk-dev] [PATCH v5 2/4] lib/hash: integrate RCU QSBR

> On Oct 20, 2020, at 9:42 PM, Wang, Yipeng1 wrote:
> 
>> -----Original Message-----
>> From: Dharmik Thakkar
>> Sent: Tuesday, October 20, 2020 9:13 AM
>> To: Wang, Yipeng1; Gobriel, Sameh; Richardson, Bruce; Ray Kinsella; Neil Horman
>> Cc: dev@dpdk.org; nd@arm.com; Dharmik Thakkar
>> Subject: [PATCH v5 2/4] lib/hash: integrate RCU QSBR
>>
>> Currently, users have to use external RCU mechanisms to free resources
>> when using lock free hash algorithm.
>>
>> Integrate RCU QSBR process to make it easier for the applications to use
>> lock free algorithm.
>> Refer to RCU documentation to understand various aspects of integrating
>> RCU library into other libraries.
>>
>> Suggested-by: Honnappa Nagarahalli
>> Signed-off-by: Dharmik Thakkar
>> Reviewed-by: Ruifeng Wang
>> Acked-by: Ray Kinsella
>> ---
>>  doc/guides/prog_guide/hash_lib.rst |  11 +-
>>  lib/librte_hash/meson.build        |   1 +
>>  lib/librte_hash/rte_cuckoo_hash.c  | 302 ++++++++++++++++++++++-------
>>  lib/librte_hash/rte_cuckoo_hash.h  |   8 +
>>  lib/librte_hash/rte_hash.h         |  77 +++++++-
>>  lib/librte_hash/version.map        |   2 +-
>>  6 files changed, 325 insertions(+), 76 deletions(-)
>>
>> diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
>> index d06c7de2ead1..63e183ed1f08 100644
>> --- a/doc/guides/prog_guide/hash_lib.rst
>> +++ b/doc/guides/prog_guide/hash_lib.rst
>> @@ -102,6 +102,9 @@ For concurrent writes, and concurrent reads and writes the following flag values
>>  *  If the 'do not free on delete' (RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL) flag is set, the position of the entry in the hash table is not freed upon calling delete(). This flag is enabled
>>     by default when the lock free read/write concurrency flag is set. The application should free the position after all the readers have stopped referencing the position.
>>     Where required, the application can make use of RCU mechanisms to determine when the readers have stopped referencing the position.
>> +   RCU QSBR process is integrated within the Hash library for safe freeing of the position. Application has certain responsibilities while using this feature.
>> +   Please refer to resource reclamation framework of :ref:`RCU library ` for more details.
>
> [Yipeng]: Maybe also add: rte_hash_rcu_qsbr_add() needs to be called to use the embedded RCU mechanism.
> Just to give the user a pointer to which API to look at.

Copy.
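
For reference, the kind of snippet the documentation could point at might look
like the sketch below. It is written against the API added by this patch
(rte_hash_rcu_qsbr_add() and struct rte_hash_rcu_config); the table parameters,
the MAX_READER_THREADS count and the create_hash_with_rcu()/free_key_data()
names are illustrative, and error handling is trimmed.

#include <rte_common.h>
#include <rte_hash.h>
#include <rte_jhash.h>
#include <rte_lcore.h>
#include <rte_malloc.h>
#include <rte_rcu_qsbr.h>

#define MAX_READER_THREADS 4	/* illustrative reader-thread count */

/* Called by the hash library once no reader can still reference key_data.
 * Assumes the application allocated the key data with rte_malloc().
 */
static void
free_key_data(void *p, void *key_data)
{
	RTE_SET_USED(p);	/* key_data_ptr from the config, unused here */
	rte_free(key_data);
}

static struct rte_hash *
create_hash_with_rcu(struct rte_rcu_qsbr **qsv_out)
{
	struct rte_hash_rcu_config rcu_cfg = {0};
	struct rte_hash_parameters params = {0};
	struct rte_rcu_qsbr *qsv = NULL;
	struct rte_hash *h;
	size_t sz;

	params.name = "example_hash";
	params.entries = 1024;
	params.key_len = sizeof(uint32_t);
	params.hash_func = rte_jhash;
	params.socket_id = (int)rte_socket_id();
	params.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF;

	h = rte_hash_create(&params);
	if (h == NULL)
		return NULL;

	/* QSBR variable shared between this writer and the reader threads. */
	sz = rte_rcu_qsbr_get_memsize(MAX_READER_THREADS);
	qsv = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	if (qsv == NULL || rte_rcu_qsbr_init(qsv, MAX_READER_THREADS) != 0)
		goto error;

	/* Attach the QSBR variable immediately after table creation. */
	rcu_cfg.v = qsv;
	rcu_cfg.mode = RTE_HASH_QSBR_MODE_DQ;	/* defer-queue reclamation */
	rcu_cfg.free_key_data_func = free_key_data;
	if (rte_hash_rcu_qsbr_add(h, &rcu_cfg) != 0)
		goto error;

	*qsv_out = qsv;
	return h;

error:
	rte_free(qsv);
	rte_hash_free(h);
	return NULL;
}

With this in place, rte_hash_del_key() hands the key index (and any emptied
extendable bucket) to the defer queue, and the slot is recycled once all
registered readers have reported a quiescent state.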
>> +
>>
>>  Extendable Bucket Functionality support
>>  ----------------------------------------
>> @@ -109,8 +112,8 @@ An extra flag is used to enable this functionality (flag is not set by default).
>>  in the very unlikely case due to excessive hash collisions that a key has failed to be inserted, the hash table bucket is extended with a linked list to insert these failed keys. This feature is important for the workloads (e.g. telco workloads) that need to insert up to 100% of the hash table size and can't tolerate any key insertion failure (even if very few).
>> -Please note that with the 'lock free read/write concurrency' flag enabled, users need to call 'rte_hash_free_key_with_position' API in order to free the empty buckets and
>> -deleted keys, to maintain the 100% capacity guarantee.
>> +Please note that with the 'lock free read/write concurrency' flag
>> +enabled, users need to call 'rte_hash_free_key_with_position' API or configure integrated RCU QSBR (or use external RCU mechanisms) in order to free the empty buckets and deleted keys, to maintain the 100% capacity guarantee.
>>
>>  Implementation Details (non Extendable Bucket Case)
>>  ---------------------------------------------------
>> @@ -172,7 +175,7 @@ Example of deletion:
>>  Similar to lookup, the key is searched in its primary and secondary buckets. If the key is found, the entry is marked as empty. If the hash table was configured with 'no free on delete' or 'lock free read/write concurrency', the position of the key is not freed. It is the responsibility of the user to free the position after
>> -readers are not referencing the position anymore.
>> +readers are not referencing the position anymore. User can configure
>> +integrated RCU QSBR or use external RCU mechanisms to safely free the
>> +position on delete
>>
>>
>>  Implementation Details (with Extendable Bucket)
>> @@ -286,6 +289,8 @@ The flow table operations on the application side are described below:
>>  *   Free flow: Free flow key position. If 'no free on delete' or 'lock-free read/write concurrency' flags are set,
>>      wait till the readers are not referencing the position returned during add/delete flow and then free the position.
>>      RCU mechanisms can be used to find out when the readers are not referencing the position anymore.
>> +    RCU QSBR process is integrated within the Hash library for safe freeing of the position. Application has certain responsibilities while using this feature.
>> +    Please refer to resource reclamation framework of :ref:`RCU library ` for more details.
>>
>>  *   Lookup flow: Lookup for the flow key in the hash.
>>      If the returned position is valid (flow lookup hit), use the returned position to access the flow entry in the flow table.
>> diff --git a/lib/librte_hash/meson.build b/lib/librte_hash/meson.build
>> index 6ab46ae9d768..0977a63fd279 100644
>> --- a/lib/librte_hash/meson.build
>> +++ b/lib/librte_hash/meson.build
>> @@ -10,3 +10,4 @@ headers = files('rte_crc_arm64.h',
>>
>>  sources = files('rte_cuckoo_hash.c', 'rte_fbk_hash.c')
>>  deps += ['ring']
>> +deps += ['rcu']
>> diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
>> index aad0c965be5e..b9e4d82a0c14 100644
>> --- a/lib/librte_hash/rte_cuckoo_hash.c
>> +++ b/lib/librte_hash/rte_cuckoo_hash.c
>> @@ -52,6 +52,11 @@ static struct rte_tailq_elem rte_hash_tailq = {
>>  };
>>  EAL_REGISTER_TAILQ(rte_hash_tailq)
>>
>> +struct __rte_hash_rcu_dq_entry {
>> +	uint32_t key_idx;
>> +	uint32_t ext_bkt_idx; /**< Extended bkt index */
>> +};
>> +
>>  struct rte_hash *
>>  rte_hash_find_existing(const char *name)
>>  {
>> @@ -210,7 +215,10 @@ rte_hash_create(const struct rte_hash_parameters *params)
>>
>>  	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF) {
>>  		readwrite_concur_lf_support = 1;
>> -		/* Enable not freeing internal memory/index on delete */
>> +		/* Enable not freeing internal memory/index on delete.
>> +		 * If internal RCU is enabled, freeing of internal memory/index
>> +		 * is done on delete
>> +		 */
>>  		no_free_on_del = 1;
>>  	}
>>
>> @@ -505,6 +513,10 @@ rte_hash_free(struct rte_hash *h)
>>
>>  	rte_mcfg_tailq_write_unlock();
>>
>> +	/* RCU clean up. */
>> +	if (h->dq)
>> +		rte_rcu_qsbr_dq_delete(h->dq);
>> +
>>  	if (h->use_local_cache)
>>  		rte_free(h->local_free_slots);
>>  	if (h->writer_takes_lock)
>> @@ -607,11 +619,21 @@ void
>>  rte_hash_reset(struct rte_hash *h)
>>  {
>>  	uint32_t tot_ring_cnt, i;
>> +	unsigned int pending;
>>
>>  	if (h == NULL)
>>  		return;
>>
>>  	__hash_rw_writer_lock(h);
>> +
>> +	/* RCU QSBR clean up. */
>> +	if (h->dq) {
>> +		/* Reclaim all the resources */
>> +		rte_rcu_qsbr_dq_reclaim(h->dq, ~0, NULL, &pending, NULL);
>> +		if (pending != 0)
>> +			RTE_LOG(ERR, HASH, "RCU reclaim all resources failed\n");
>> +	}
>> +
>>  	memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket));
>>  	memset(h->key_store, 0, h->key_entry_size * (h->entries + 1));
>>  	*h->tbl_chng_cnt = 0;
>> @@ -952,6 +974,37 @@ rte_hash_cuckoo_make_space_mw(const struct rte_hash *h,
>>  	return -ENOSPC;
>>  }
>>
>> +static inline uint32_t
>> +alloc_slot(const struct rte_hash *h, struct lcore_cache *cached_free_slots)
>> +{
>> +	unsigned int n_slots;
>> +	uint32_t slot_id;
>
> [Yipeng]: Blank line after variable declaration.

Copy.

>
>> +	if (h->use_local_cache) {
>> +		/* Try to get a free slot from the local cache */
>> +		if (cached_free_slots->len == 0) {
>> +			/* Need to get another burst of free slots from global ring */
>> +			n_slots = rte_ring_mc_dequeue_burst_elem(h->free_slots,
>> +					cached_free_slots->objs,
>> +					sizeof(uint32_t),
>> +					LCORE_CACHE_SIZE, NULL);
>> +			if (n_slots == 0)
>> +				return EMPTY_SLOT;
>> +
>> +			cached_free_slots->len += n_slots;
>> +		}
>> +
>> +		/* Get a free slot from the local cache */
>> +		cached_free_slots->len--;
>> +		slot_id = cached_free_slots->objs[cached_free_slots->len];
>> +	} else {
>> +		if (rte_ring_sc_dequeue_elem(h->free_slots, &slot_id,
>> +						sizeof(uint32_t)) != 0)
>> +			return EMPTY_SLOT;
>> +	}
>> +
>> +	return slot_id;
>> +}
>> +
>>  static inline int32_t
>>  __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>>  						hash_sig_t sig, void *data)
>> @@ -963,7 +1016,6 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>>  	uint32_t ext_bkt_id = 0;
>>  	uint32_t slot_id;
>>  	int ret;
>> -	unsigned n_slots;
>>  	unsigned lcore_id;
>>  	unsigned int i;
>>  	struct lcore_cache *cached_free_slots = NULL;
>> @@ -1001,28 +1053,20 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>>  	if (h->use_local_cache) {
>>  		lcore_id = rte_lcore_id();
>>  		cached_free_slots = &h->local_free_slots[lcore_id];
>> -		/* Try to get a free slot from the local cache */
>> -		if (cached_free_slots->len == 0) {
>> -			/* Need to get another burst of free slots from global ring */
>> -			n_slots = rte_ring_mc_dequeue_burst_elem(h->free_slots,
>> -					cached_free_slots->objs,
>> -					sizeof(uint32_t),
>> -					LCORE_CACHE_SIZE, NULL);
>> -			if (n_slots == 0) {
>> -				return -ENOSPC;
>> -			}
>> -
>> -			cached_free_slots->len += n_slots;
>> +	}
>> +	slot_id = alloc_slot(h, cached_free_slots);
>> +	if (slot_id == EMPTY_SLOT) {
>> +		if (h->dq) {
>> +			__hash_rw_writer_lock(h);
>> +			ret = rte_rcu_qsbr_dq_reclaim(h->dq,
>> +					h->hash_rcu_cfg->max_reclaim_size,
>> +					NULL, NULL, NULL);
>> +			__hash_rw_writer_unlock(h);
>> +			if (ret == 0)
>> +				slot_id = alloc_slot(h, cached_free_slots);
>>  		}
>> -
>> -		/* Get a free slot from the local cache */
>> -		cached_free_slots->len--;
>> -		slot_id = cached_free_slots->objs[cached_free_slots->len];
>> -	} else {
>> -		if (rte_ring_sc_dequeue_elem(h->free_slots, &slot_id,
>> -						sizeof(uint32_t)) != 0) {
>> +		if (slot_id == EMPTY_SLOT)
>>  			return -ENOSPC;
>> -		}
>>  	}
>>
>>  	new_k = RTE_PTR_ADD(keys, slot_id * h->key_entry_size);
>> @@ -1118,8 +1162,19 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>>  		if (rte_ring_sc_dequeue_elem(h->free_ext_bkts, &ext_bkt_id,
>>  						sizeof(uint32_t)) != 0 ||
>>  						ext_bkt_id == 0) {
>> -			ret = -ENOSPC;
>> -			goto failure;
>> +			if (h->dq) {
>> +				if (rte_rcu_qsbr_dq_reclaim(h->dq,
>> +						h->hash_rcu_cfg->max_reclaim_size,
>> +						NULL, NULL, NULL) == 0) {
>> +					rte_ring_sc_dequeue_elem(h->free_ext_bkts,
>> +							&ext_bkt_id,
>> +							sizeof(uint32_t));
>> +				}
>> +			}
>> +			if (ext_bkt_id == 0) {
>> +				ret = -ENOSPC;
>> +				goto failure;
>> +			}
>>  		}
>>
>>  		/* Use the first location of the new bucket */
>> @@ -1395,12 +1450,12 @@ rte_hash_lookup_data(const struct rte_hash *h, const void *key, void **data)
>>  	return __rte_hash_lookup_with_hash(h, key, rte_hash_hash(h, key), data);
>>  }
>>
>> -static inline void
>> -remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
>> +static int
>> +free_slot(const struct rte_hash *h, uint32_t slot_id)
>>  {
>>  	unsigned lcore_id, n_slots;
>> -	struct lcore_cache *cached_free_slots;
>> -
>> +	struct lcore_cache *cached_free_slots = NULL;
>> +	/* Return key indexes to free slot ring */
>>  	if (h->use_local_cache) {
>>  		lcore_id = rte_lcore_id();
>>  		cached_free_slots = &h->local_free_slots[lcore_id];
>> @@ -1411,18 +1466,127 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
>>  					cached_free_slots->objs,
>>  					sizeof(uint32_t),
>>  					LCORE_CACHE_SIZE, NULL);
>> -			ERR_IF_TRUE((n_slots == 0),
>> -				"%s: could not enqueue free slots in global ring\n",
>> -				__func__);
>> +			RETURN_IF_TRUE((n_slots == 0), -EFAULT);
>>  			cached_free_slots->len -= n_slots;
>>  		}
>> -		/* Put index of new free slot in cache. */
>> -		cached_free_slots->objs[cached_free_slots->len] =
>> -						bkt->key_idx[i];
>> -		cached_free_slots->len++;
>> +	}
>> +
>> +	enqueue_slot_back(h, cached_free_slots, slot_id);
>> +	return 0;
>> +}
>> +
>> +static void
>> +__hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n)
>> +{
>> +	void *key_data = NULL;
>> +	int ret;
>> +	struct rte_hash_key *keys, *k;
>> +	struct rte_hash *h = (struct rte_hash *)p;
>> +	struct __rte_hash_rcu_dq_entry rcu_dq_entry =
>> +			*((struct __rte_hash_rcu_dq_entry *)e);
>> +
>> +	RTE_SET_USED(n);
>> +	keys = h->key_store;
>> +
>> +	k = (struct rte_hash_key *) ((char *)keys +
>> +				rcu_dq_entry.key_idx * h->key_entry_size);
>> +	key_data = k->pdata;
>> +	if (h->hash_rcu_cfg->free_key_data_func)
>> +		h->hash_rcu_cfg->free_key_data_func(h->hash_rcu_cfg->key_data_ptr,
>> +						    key_data);
>> +
>> +	if (h->ext_table_support && rcu_dq_entry.ext_bkt_idx != EMPTY_SLOT)
>> +		/* Recycle empty ext bkt to free list. */
>> +		rte_ring_sp_enqueue_elem(h->free_ext_bkts,
>> +			&rcu_dq_entry.ext_bkt_idx, sizeof(uint32_t));
>> +
>> +	/* Return key indexes to free slot ring */
>> +	ret = free_slot(h, rcu_dq_entry.key_idx);
>> +	if (ret < 0) {
>> +		RTE_LOG(ERR, HASH,
>> +			"%s: could not enqueue free slots in global ring\n",
>> +			__func__);
>> +	}
>> +}
>> +
>> +int
>> +rte_hash_rcu_qsbr_add(struct rte_hash *h,
>> +				struct rte_hash_rcu_config *cfg)
>> +{
>> +	struct rte_rcu_qsbr_dq_parameters params = {0};
>> +	char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
>> +	struct rte_hash_rcu_config *hash_rcu_cfg = NULL;
>> +
>> +	const uint32_t total_entries = h->use_local_cache ?
>> +		h->entries + (RTE_MAX_LCORE - 1) * (LCORE_CACHE_SIZE - 1) + 1
>> +		: h->entries + 1;
>> +
>> +	if ((h == NULL) || cfg == NULL || cfg->v == NULL) {
>> +		rte_errno = EINVAL;
>> +		return 1;
>> +	}
>> +
>> +	if (h->hash_rcu_cfg) {
>> +		rte_errno = EEXIST;
>> +		return 1;
>> +	}
>> +
>> +	hash_rcu_cfg = rte_zmalloc(NULL, sizeof(struct rte_hash_rcu_config), 0);
>> +	if (hash_rcu_cfg == NULL) {
>> +		RTE_LOG(ERR, HASH, "memory allocation failed\n");
>> +		return 1;
>> +	}
>> +
>> +	if (cfg->mode == RTE_HASH_QSBR_MODE_SYNC) {
>> +		/* No other things to do. */
>> +	} else if (cfg->mode == RTE_HASH_QSBR_MODE_DQ) {
>> +		/* Init QSBR defer queue. */
>> +		snprintf(rcu_dq_name, sizeof(rcu_dq_name),
>> +					"HASH_RCU_%s", h->name);
>> +		params.name = rcu_dq_name;
>> +		params.size = cfg->dq_size;
>> +		if (params.size == 0)
>> +			params.size = total_entries;
>> +		params.trigger_reclaim_limit = cfg->trigger_reclaim_limit;
>> +		if (params.max_reclaim_size == 0)
>> +			params.max_reclaim_size = RTE_HASH_RCU_DQ_RECLAIM_MAX;
>> +		params.esize = sizeof(struct __rte_hash_rcu_dq_entry);
>> +		params.free_fn = __hash_rcu_qsbr_free_resource;
>> +		params.p = h;
>> +		params.v = cfg->v;
>> +		h->dq = rte_rcu_qsbr_dq_create(&params);
>> +		if (h->dq == NULL) {
>> +			rte_free(hash_rcu_cfg);
>> +			RTE_LOG(ERR, HASH, "HASH defer queue creation failed\n");
>> +			return 1;
>> +		}
>>  	} else {
>> -		rte_ring_sp_enqueue_elem(h->free_slots,
>> -				&bkt->key_idx[i], sizeof(uint32_t));
>> +		rte_free(hash_rcu_cfg);
>> +		rte_errno = EINVAL;
>> +		return 1;
>> +	}
>> +
>> +	hash_rcu_cfg->v = cfg->v;
>> +	hash_rcu_cfg->mode = cfg->mode;
>> +	hash_rcu_cfg->dq_size = params.size;
>> +	hash_rcu_cfg->trigger_reclaim_limit = params.trigger_reclaim_limit;
>> +	hash_rcu_cfg->max_reclaim_size = params.max_reclaim_size;
>> +	hash_rcu_cfg->free_key_data_func = cfg->free_key_data_func;
>> +	hash_rcu_cfg->key_data_ptr = cfg->key_data_ptr;
>> +
>> +	h->hash_rcu_cfg = hash_rcu_cfg;
>> +
>> +	return 0;
>> +}
>> +
>> +static inline void
>> +remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
>> +{
>> +	int ret = free_slot(h, bkt->key_idx[i]);
>> +	if (ret < 0) {
>> +		RTE_LOG(ERR, HASH,
>> +			"%s: could not enqueue free slots in global ring\n",
>> +			__func__);
>>  	}
>>  }
>>
>> @@ -1521,6 +1685,8 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>>  	int pos;
>>  	int32_t ret, i;
>>  	uint16_t short_sig;
>> +	uint32_t index = EMPTY_SLOT;
>> +	struct __rte_hash_rcu_dq_entry rcu_dq_entry;
>>
>>  	short_sig = get_short_sig(sig);
>>  	prim_bucket_idx = get_prim_bucket_index(h, sig);
>> @@ -1555,10 +1721,9 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>>
>>  	/* Search last bucket to see if empty to be recycled */
>>  return_bkt:
>> -	if (!last_bkt) {
>> -		__hash_rw_writer_unlock(h);
>> -		return ret;
>> -	}
>> +	if (!last_bkt)
>> +		goto return_key;
>> +
>>  	while (last_bkt->next) {
>>  		prev_bkt = last_bkt;
>>  		last_bkt = last_bkt->next;
>> @@ -1571,11 +1736,11 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>>  	/* found empty bucket and recycle */
>>  	if (i == RTE_HASH_BUCKET_ENTRIES) {
>>  		prev_bkt->next = NULL;
>> -		uint32_t index = last_bkt - h->buckets_ext + 1;
>> +		index = last_bkt - h->buckets_ext + 1;
>>  		/* Recycle the empty bkt if
>>  		 * no_free_on_del is disabled.
>>  		 */
>> -		if (h->no_free_on_del)
>> +		if (h->no_free_on_del) {
>>  			/* Store index of an empty ext bkt to be recycled
>>  			 * on calling rte_hash_del_xxx APIs.
>>  			 * When lock free read-write concurrency is enabled,
>> @@ -1583,12 +1748,34 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
>>  			 * immediately (as readers might be using it still).
>>  			 * Hence freeing of the ext bkt is piggy-backed to
>>  			 * freeing of the key index.
>> +			 * If using external RCU, store this index in an array.
>>  			 */
>> -			h->ext_bkt_to_free[ret] = index;
>> +			if (h->hash_rcu_cfg == NULL)
>> +				h->ext_bkt_to_free[ret] = index;
>
>
> [Yipeng]: If using embedded qsbr (not NULL), how did you recycle the ext bkt?

In DQ mode, ext_bkt_idx is enqueued to the defer queue along with the key_idx and
recycled as part of rte_rcu_qsbr_dq_reclaim(). In SYNC mode, it is recycled after
rte_rcu_qsbr_synchronize() succeeds.
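
To connect this to the application side (not part of the patch): the defer-queue
or synchronize based reclamation above can only complete once every reader thread
registered on the QSBR variable has reported a quiescent state. A minimal
reader-side sketch, where the burst size and the use_data() helper are
placeholders:

#include <stdbool.h>
#include <stdint.h>

#include <rte_common.h>
#include <rte_hash.h>
#include <rte_rcu_qsbr.h>

/* Placeholder for whatever the application does with the looked-up data. */
static void
use_data(void *data)
{
	RTE_SET_USED(data);
}

/* Reader thread: register on the QSBR variable attached to the table and
 * report a quiescent state after each burst of lookups. This is what lets
 * the DQ-mode defer queue (or a SYNC-mode rte_rcu_qsbr_synchronize()) see
 * that a deleted key index and its ext bucket are no longer referenced.
 */
static int
reader_loop(struct rte_hash *h, struct rte_rcu_qsbr *qsv,
		unsigned int thread_id, volatile const bool *quit)
{
	void *data;
	uint32_t key;

	rte_rcu_qsbr_thread_register(qsv, thread_id);
	rte_rcu_qsbr_thread_online(qsv, thread_id);

	while (!*quit) {
		for (key = 0; key < 64; key++)
			if (rte_hash_lookup_data(h, &key, &data) >= 0)
				use_data(data);

		/* No references to table entries are held past this point. */
		rte_rcu_qsbr_quiescent(qsv, thread_id);
	}

	rte_rcu_qsbr_thread_offline(qsv, thread_id);
	rte_rcu_qsbr_thread_unregister(qsv, thread_id);
	return 0;
}

Readers that block or idle for long stretches can bracket that period with
rte_rcu_qsbr_thread_offline()/rte_rcu_qsbr_thread_online() so the writer's
reclamation does not have to wait for them.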
>
>> +		} else
>>  			rte_ring_sp_enqueue_elem(h->free_ext_bkts,
>>  							&index,
>>  							sizeof(uint32_t));
>>  	}
>> +
>> +return_key:
>> +	/* Using internal RCU QSBR */
>> +	if (h->hash_rcu_cfg) {
>> +		/* Key index where key is stored, adding the first dummy index */
>> +		rcu_dq_entry.key_idx = ret + 1;
>> +		rcu_dq_entry.ext_bkt_idx = index;
>> +		if (h->dq == NULL) {
>> +			/* Wait for quiescent state change if using
>> +			 * RTE_HASH_QSBR_MODE_SYNC
>> +			 */
>> +			rte_rcu_qsbr_synchronize(h->hash_rcu_cfg->v,
>> +						 RTE_QSBR_THRID_INVALID);
>> +			__hash_rcu_qsbr_free_resource((void *)((uintptr_t)h),
>> +						      &rcu_dq_entry, 1);
>> +		} else if (h->dq)
>> +			/* Push into QSBR FIFO if using RTE_HASH_QSBR_MODE_DQ */
>> +			if (rte_rcu_qsbr_dq_enqueue(h->dq, &rcu_dq_entry) != 0)
>> +				RTE_LOG(ERR, HASH, "Failed to push QSBR FIFO\n");
>> +	}
>>  	__hash_rw_writer_unlock(h);
>>  	return ret;
>>  }
>> @@ -1637,8 +1824,6 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
>>
>>  	RETURN_IF_TRUE(((h == NULL) || (key_idx == EMPTY_SLOT)), -EINVAL);
>>
>> -	unsigned int lcore_id, n_slots;
>> -	struct lcore_cache *cached_free_slots;
>>  	const uint32_t total_entries = h->use_local_cache ?
>>  		h->entries + (RTE_MAX_LCORE - 1) * (LCORE_CACHE_SIZE - 1) + 1
>>  		: h->entries + 1;
>> @@ -1656,28 +1841,9 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
>>  		}
>>  	}
>>
>> -	if (h->use_local_cache) {
>> -		lcore_id = rte_lcore_id();
>> -		cached_free_slots = &h->local_free_slots[lcore_id];
>> -		/* Cache full, need to free it. */
>> -		if (cached_free_slots->len == LCORE_CACHE_SIZE) {
>> -			/* Need to enqueue the free slots in global ring. */
>> -			n_slots = rte_ring_mp_enqueue_burst_elem(h->free_slots,
>> -					cached_free_slots->objs,
>> -					sizeof(uint32_t),
>> -					LCORE_CACHE_SIZE, NULL);
>> -			RETURN_IF_TRUE((n_slots == 0), -EFAULT);
>> -			cached_free_slots->len -= n_slots;
>> -		}
>> -		/* Put index of new free slot in cache. */
>> -		cached_free_slots->objs[cached_free_slots->len] = key_idx;
>> -		cached_free_slots->len++;
>> -	} else {
>> -		rte_ring_sp_enqueue_elem(h->free_slots, &key_idx,
>> -						sizeof(uint32_t));
>> -	}
>> +	/* Enqueue slot to cache/ring of free slots. */
>> +	return free_slot(h, key_idx);
>>
>> -	return 0;
>>  }
>>
>>  static inline void
>> diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h
>> index 345de6bf9cfd..85be49d3bbe7 100644
>> --- a/lib/librte_hash/rte_cuckoo_hash.h
>> +++ b/lib/librte_hash/rte_cuckoo_hash.h
>> @@ -168,6 +168,11 @@ struct rte_hash {
>>  	struct lcore_cache *local_free_slots;
>>  	/**< Local cache per lcore, storing some indexes of the free slots */
>>
>> +	/* RCU config */
>> +	struct rte_hash_rcu_config *hash_rcu_cfg;
>> +	/**< HASH RCU QSBR configuration structure */
>> +	struct rte_rcu_qsbr_dq *dq;	/**< RCU QSBR defer queue. */
>> +
>>  	/* Fields used in lookup */
>>
>>  	uint32_t key_len __rte_cache_aligned;
>> @@ -230,4 +235,7 @@ struct queue_node {
>>  	int prev_slot;               /* Parent(slot) in search path */
>>  };
>>
>> +/** @internal Default RCU defer queue entries to reclaim in one go. */
>> +#define RTE_HASH_RCU_DQ_RECLAIM_MAX	16
>> +
>>  #endif
>> diff --git a/lib/librte_hash/rte_hash.h b/lib/librte_hash/rte_hash.h
>> index bff40251bc98..3d28f177f14a 100644
>> --- a/lib/librte_hash/rte_hash.h
>> +++ b/lib/librte_hash/rte_hash.h
>> @@ -15,6 +15,7 @@
>>  #include
>>
>>  #include
>> +#include
>>
>>  #ifdef __cplusplus
>>  extern "C" {
>> @@ -45,7 +46,8 @@ extern "C" {
>>  /** Flag to disable freeing of key index on hash delete.
>>   * Refer to rte_hash_del_xxx APIs for more details.
>>   * This is enabled by default when RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF
>> - * is enabled.
>> + * is enabled. However, if internal RCU is enabled, freeing of internal
>> + * memory/index is done on delete
>>   */
>>  #define RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL 0x10
>>
>> @@ -67,6 +69,13 @@ typedef uint32_t (*rte_hash_function)(const void *key, uint32_t key_len,
>>  /** Type of function used to compare the hash key. */
>>  typedef int (*rte_hash_cmp_eq_t)(const void *key1, const void *key2, size_t key_len);
>>
>> +/**
>> + * Type of function used to free data stored in the key.
>> + * Required when using internal RCU to allow application to free key-data once
>> + * the key is returned to the the ring of free key-slots.
>> + */
>> +typedef void (*rte_hash_free_key_data)(void *p, void *key_data);
>> +
>>  /**
>>   * Parameters used when creating the hash table.
>>   */
>> @@ -81,6 +90,39 @@ struct rte_hash_parameters {
>>  	uint8_t extra_flag;		/**< Indicate if additional parameters are present. */
>>  };
>>
>> +/** RCU reclamation modes */
>> +enum rte_hash_qsbr_mode {
>> +	/** Create defer queue for reclaim. */
>> +	RTE_HASH_QSBR_MODE_DQ = 0,
>> +	/** Use blocking mode reclaim. No defer queue created. */
>> +	RTE_HASH_QSBR_MODE_SYNC
>> +};
>> +
>> +/** HASH RCU QSBR configuration structure. */
>> +struct rte_hash_rcu_config {
>> +	struct rte_rcu_qsbr *v;		/**< RCU QSBR variable. */
>> +	enum rte_hash_qsbr_mode mode;
>> +	/**< Mode of RCU QSBR. RTE_HASH_QSBR_MODE_xxx
>> +	 * '0' for default: create defer queue for reclaim.
>> +	 */
>> +	uint32_t dq_size;
>> +	/**< RCU defer queue size.
>> +	 * default: total hash table entries.
>> +	 */
>> +	uint32_t trigger_reclaim_limit;	/**< Threshold to trigger auto reclaim. */
>> +	uint32_t max_reclaim_size;
>> +	/**< Max entries to reclaim in one go.
>> +	 * default: RTE_HASH_RCU_DQ_RECLAIM_MAX.
>> +	 */
>> +	void *key_data_ptr;
>> +	/**< Pointer passed to the free function. Typically, this is the
>> +	 * pointer to the data structure to which the resource to free
>> +	 * (key-data) belongs. This can be NULL.
>> +	 */
>> +	rte_hash_free_key_data free_key_data_func;
>> +	/**< Function to call to free the resource (key-data). */
>> +};
>> +
>>  /** @internal A hash table structure. */
>>  struct rte_hash;
>>
>> @@ -287,7 +329,8 @@ rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key, hash_sig_t
>>   * Thread safety can be enabled by setting flag during
>>   * table creation.
>>   * If RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL or
>> - * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled,
>> + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled and
>> + * internal RCU is NOT enabled,
>>   * the key index returned by rte_hash_add_key_xxx APIs will not be
>>   * freed by this API. rte_hash_free_key_with_position API must be called
>>   * additionally to free the index associated with the key.
>> @@ -316,7 +359,8 @@ rte_hash_del_key(const struct rte_hash *h, const void *key);
>>   * Thread safety can be enabled by setting flag during
>>   * table creation.
>>   * If RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL or
>> - * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled,
>> + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled and
>> + * internal RCU is NOT enabled,
>>   * the key index returned by rte_hash_add_key_xxx APIs will not be
>>   * freed by this API. rte_hash_free_key_with_position API must be called
>>   * additionally to free the index associated with the key.
>> @@ -370,7 +414,8 @@ rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
>>   * only be called from one thread by default. Thread safety
>>   * can be enabled by setting flag during table creation.
>>   * If RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL or
>> - * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled,
>> + * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF is enabled and
>> + * internal RCU is NOT enabled,
>>   * the key index returned by rte_hash_del_key_xxx APIs must be freed
>>   * using this API. This API should be called after all the readers
>>   * have stopped referencing the entry corresponding to this key.
>> @@ -625,6 +670,30 @@ rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
>>   */
>>  int32_t
>>  rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32_t *next);
>> +
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice
>> + *
>> + * Associate RCU QSBR variable with an Hash object.
> [Yipeng]: a Hash object

Copy.

>
>> + * This API should be called to enable the integrated RCU QSBR support and
>> + * should be called immediately after creating the Hash object.
>> + *
>> + * @param h
>> + *   the hash object to add RCU QSBR
>> + * @param cfg
>> + *   RCU QSBR configuration
>> + * @return
>> + *   On success - 0
>> + *   On error - 1 with error code set in rte_errno.
>> + *   Possible rte_errno codes are:
>> + *   - EINVAL - invalid pointer
>> + *   - EEXIST - already added QSBR
>> + *   - ENOMEM - memory allocation failure
>> + */
>> +__rte_experimental
>> +int rte_hash_rcu_qsbr_add(struct rte_hash *h,
>> +				struct rte_hash_rcu_config *cfg);
>>  #ifdef __cplusplus
>>  }
>>  #endif
>> diff --git a/lib/librte_hash/version.map b/lib/librte_hash/version.map
>> index c0db81014ff9..c6d73080f478 100644
>> --- a/lib/librte_hash/version.map
>> +++ b/lib/librte_hash/version.map
>> @@ -36,5 +36,5 @@ EXPERIMENTAL {
>>  	rte_hash_lookup_with_hash_bulk;
>>  	rte_hash_lookup_with_hash_bulk_data;
>>  	rte_hash_max_key_id;
>> -
>> +	rte_hash_rcu_qsbr_add;
>>  };
>> --
>> 2.17.1
> [Yipeng]:
>
> Hi, Dharmik, thanks for the work! It generally looks good.
> Just some minor issues to address and one question for the ext table inlined.

Thank you for the comments, Yipeng! I will update the patches.

>
> Thanks!