From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: Ruifeng Wang <Ruifeng.Wang@arm.com>, Bruce Richardson
 <bruce.richardson@intel.com>, Vladimir Medvedkin
 <vladimir.medvedkin@intel.com>, John McNamara <john.mcnamara@intel.com>,
 Marko Kovacevic <marko.kovacevic@intel.com>, Ray Kinsella <mdr@ashroe.eu>,
 Neil Horman <nhorman@tuxdriver.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "konstantin.ananyev@intel.com"
 <konstantin.ananyev@intel.com>, nd <nd@arm.com>, Ruifeng Wang
 <Ruifeng.Wang@arm.com>, Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
 nd <nd@arm.com>
Date: Mon, 8 Jun 2020 18:46:21 +0000
Message-ID: <DB6PR0802MB2216955A768538FC0F45436598850@DB6PR0802MB2216.eurprd08.prod.outlook.com>
References: <20190906094534.36060-1-ruifeng.wang@arm.com>
 <20200608051658.144417-1-ruifeng.wang@arm.com>
 <20200608051658.144417-2-ruifeng.wang@arm.com>
In-Reply-To: <20200608051658.144417-2-ruifeng.wang@arm.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/3] lib/lpm: integrate RCU QSBR

<snip>

> Subject: [PATCH v4 1/3] lib/lpm: integrate RCU QSBR
>
> Currently, the tbl8 group is freed even though the readers might be using the
> tbl8 group entries. The freed tbl8 group can be reallocated quickly. This
> results in incorrect lookup results.
>
> RCU QSBR process is integrated for safe tbl8 group reclaim.
> Refer to RCU documentation to understand various aspects of integrating
> RCU library into other libraries.
>
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> ---
>  doc/guides/prog_guide/lpm_lib.rst  |  32 ++++++++
>  lib/librte_lpm/Makefile            |   2 +-
>  lib/librte_lpm/meson.build         |   1 +
>  lib/librte_lpm/rte_lpm.c           | 123 ++++++++++++++++++++++++++---
>  lib/librte_lpm/rte_lpm.h           |  59 ++++++++++++++
>  lib/librte_lpm/rte_lpm_version.map |   6 ++
>  6 files changed, 211 insertions(+), 12 deletions(-)
>
> diff --git a/doc/guides/prog_guide/lpm_lib.rst
> b/doc/guides/prog_guide/lpm_lib.rst
> index 1609a57d0..7cc99044a 100644
> --- a/doc/guides/prog_guide/lpm_lib.rst
> +++ b/doc/guides/prog_guide/lpm_lib.rst
> @@ -145,6 +145,38 @@ depending on whether we need to move to the next
> table or not.
>  Prefix expansion is one of the keys of this algorithm,  since it improves the
> speed dramatically by adding redundancy.
>
> +Deletion
> +~~~~~~~~
> +
> +When deleting a rule, a replacement rule is searched for. Replacement
> +rule is an existing rule that has the longest prefix match with the rule to be
> deleted, but has smaller depth.
> +
> +If a replacement rule is found, target tbl24 and tbl8 entries are
> +updated to have the same depth and next hop value with the replacement
> rule.
> +
> +If no replacement rule can be found, target tbl24 and tbl8 entries will be
> cleared.
> +
> +Prefix expansion is performed if the rule's depth is not exactly 24 bits or 32
> bits.
> +
> +After deleting a rule, a group of tbl8s that belongs to the same tbl24 entry
> are freed in following cases:
> +
> +*   All tbl8s in the group are empty .
> +
> +*   All tbl8s in the group have the same values and with depth no greater
> than 24.
> +
> +Free of tbl8s have different behaviors:
> +
> +*   If RCU is not used, tbl8s are cleared and reclaimed immediately.
> +
> +*   If RCU is used, tbl8s are reclaimed when readers are in quiescent state.
> +
> +When the LPM is not using RCU, tbl8 group can be freed immediately even
> though the readers might be using the tbl8 group entries. This might result
> in incorrect lookup results.
> +
> +RCU QSBR process is integrated for safe tbl8 group reclaimation.
> +Application has certain responsibilities while using this feature.
> +Please refer to resource reclaimation framework of :ref:`RCU library
> <RCU_Library>` for more details.
> +
>  Lookup
>  ~~~~~~
>
> diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile index
> d682785b6..6f06c5c03 100644
> --- a/lib/librte_lpm/Makefile
> +++ b/lib/librte_lpm/Makefile
> @@ -8,7 +8,7 @@ LIB = librte_lpm.a
>
>  CFLAGS += -O3
>  CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
> -LDLIBS += -lrte_eal -lrte_hash
> +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
>
>  EXPORT_MAP := rte_lpm_version.map
>
> diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build index
> 021ac6d8d..6cfc083c5 100644
> --- a/lib/librte_lpm/meson.build
> +++ b/lib/librte_lpm/meson.build
> @@ -7,3 +7,4 @@ headers = files('rte_lpm.h', 'rte_lpm6.h')  # without
> worrying about which architecture we actually need  headers +=
> files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')  deps += ['hash']
> +deps += ['rcu']
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index
> 38ab512a4..30f541179 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> + * Copyright(c) 2020 Arm Limited
>   */
>
>  #include <string.h>
> @@ -246,12 +247,85 @@ rte_lpm_free(struct rte_lpm *lpm)
>
>  	rte_mcfg_tailq_write_unlock();
>
> +	if (lpm->dq)
> +		rte_rcu_qsbr_dq_delete(lpm->dq);
>  	rte_free(lpm->tbl8);
>  	rte_free(lpm->rules_tbl);
>  	rte_free(lpm);
>  	rte_free(te);
>  }
>
> +static void
> +__lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n) {
> +	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> +	uint32_t tbl8_group_index = *(uint32_t *)data;
> +	struct rte_lpm_tbl_entry *tbl8 = (struct rte_lpm_tbl_entry *)p;
> +
> +	RTE_SET_USED(n);
> +	/* Set tbl8 group invalid */
> +	__atomic_store(&tbl8[tbl8_group_index], &zero_tbl8_entry,
> +		__ATOMIC_RELAXED);
> +}
> +
> +/* Associate QSBR variable with an LPM object.
> + */
> +int
> +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,
> +	struct rte_rcu_qsbr_dq **dq)
I prefer not to return the defer queue to the user here. I see three different ways RCU can be integrated into a library:

1) The sync mode, in which no defer queue is created. The rte_rcu_qsbr_synchronize API is called after delete. The resource is freed after rte_rcu_qsbr_synchronize returns, and control is then given back to the user.

2) The mode where the defer queue is created. There is a lot of flexibility provided now, as the defer queue size, the reclaim threshold, and how many resources to reclaim are all configurable. IMO, this solves most of the use cases and helps the application integrate lock-less algorithms with minimal effort.

3) The case where the application has its own method of reclamation that does not fall under 1) or 2). To address this use case, I think we should make changes to the LPM library. Today, in LPM, delete and free are combined into a single API. We can split this single API into two separate APIs - delete and free (a similar thing was done to the rte_hash library) - without affecting the ABI. This should provide all the flexibility the application needs to implement any kind of reclamation algorithm it wants. Returning the defer queue to the user in the above API does not solve this use case.

> +{
> +	char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
> +	struct rte_rcu_qsbr_dq_parameters params = {0};
> +
> +	if ((lpm == NULL) || (cfg == NULL)) {
> +		rte_errno = EINVAL;
> +		return 1;
> +	}
> +
> +	if (lpm->v) {
> +		rte_errno = EEXIST;
> +		return 1;
> +	}
> +
> +	if (cfg->mode == RTE_LPM_QSBR_MODE_SYNC) {
> +		/* No other things to do. */
> +	} else if (cfg->mode == RTE_LPM_QSBR_MODE_DQ) {
> +		/* Init QSBR defer queue. */
> +		snprintf(rcu_dq_name, sizeof(rcu_dq_name),
> +				"LPM_RCU_%s", lpm->name);
> +		params.name = rcu_dq_name;
> +		params.size = cfg->dq_size;
> +		if (params.size == 0)
> +			params.size = lpm->number_tbl8s;
> +		params.trigger_reclaim_limit = cfg->reclaim_thd;
> +		if (params.trigger_reclaim_limit == 0)
> +			params.trigger_reclaim_limit =
> +					RTE_LPM_RCU_DQ_RECLAIM_THD;
> +		params.max_reclaim_size = cfg->reclaim_max;
> +		if (params.max_reclaim_size == 0)
> +			params.max_reclaim_size = RTE_LPM_RCU_DQ_RECLAIM_MAX;
> +		params.esize = sizeof(uint32_t);	/* tbl8 group index */
> +		params.free_fn = __lpm_rcu_qsbr_free_resource;
> +		params.p = lpm->tbl8;
> +		params.v = cfg->v;
> +		lpm->dq = rte_rcu_qsbr_dq_create(&params);
> +		if (lpm->dq == NULL) {
> +			RTE_LOG(ERR, LPM, "LPM QS defer queue creation failed\n");
> +			return 1;
> +		}
> +		if (dq)
> +			*dq = lpm->dq;
> +	} else {
> +		rte_errno = EINVAL;
> +		return 1;
> +	}
> +	lpm->rcu_mode = cfg->mode;
> +	lpm->v = cfg->v;
> +
> +	return 0;
> +}
> +
>  /*
>   * Adds a rule to the rule table.
>   *
> @@ -394,14 +468,15 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
>   * Find, clean and allocate a tbl8.
>   */
>  static int32_t
> -tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
> +_tbl8_alloc(struct rte_lpm *lpm)
>  {
>  	uint32_t group_idx; /* tbl8 group index. */
>  	struct rte_lpm_tbl_entry *tbl8_entry;
>
>  	/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
> -	for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
> -		tbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> +	for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
> +		tbl8_entry = &lpm->tbl8[group_idx *
> +					RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
>  		/* If a free tbl8 group is found clean it and set as VALID. */
>  		if (!tbl8_entry->valid_group) {
>  			struct rte_lpm_tbl_entry new_tbl8_entry = {
> @@ -427,14 +502,40 @@ tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
>  	return -ENOSPC;
>  }
>
> +static int32_t
> +tbl8_alloc(struct rte_lpm *lpm)
> +{
> +	int32_t group_idx; /* tbl8 group index. */
> +
> +	group_idx = _tbl8_alloc(lpm);
> +	if ((group_idx < 0) && (lpm->dq != NULL)) {
> +		/* If there are no tbl8 groups try to reclaim one. */
> +		if (rte_rcu_qsbr_dq_reclaim(lpm->dq, 1, NULL, NULL, NULL) == 0)
> +			group_idx = _tbl8_alloc(lpm);
> +	}
> +
> +	return group_idx;
> +}
> +
>  static void
> -tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> +tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>  {
> -	/* Set tbl8 group invalid*/
>  	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
>
> -	__atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
> -			__ATOMIC_RELAXED);
> +	if (!lpm->v) {
> +		/* Set tbl8 group invalid*/
> +		__atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
> +				__ATOMIC_RELAXED);
> +	} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {
> +		/* Wait for quiescent state change. */
> +		rte_rcu_qsbr_synchronize(lpm->v, RTE_QSBR_THRID_INVALID);
> +		/* Set tbl8 group invalid*/
> +		__atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
> +				__ATOMIC_RELAXED);
> +	} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
> +		/* Push into QSBR defer queue. */
> +		rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&tbl8_group_start);
> +	}
>  }
>
>  static __rte_noinline int32_t
> @@ -523,7 +624,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>
>  	if (!lpm->tbl24[tbl24_index].valid) {
>  		/* Search for a free tbl8 group. */
> -		tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
> +		tbl8_group_index = tbl8_alloc(lpm);
>
>  		/* Check tbl8 allocation was successful. */
>  		if (tbl8_group_index < 0) {
> @@ -569,7 +670,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>  	} /* If valid entry but not extended calculate the index into Table8. */
>  	else if (lpm->tbl24[tbl24_index].valid_group == 0) {
>  		/* Search for free tbl8 group. */
> -		tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
> +		tbl8_group_index = tbl8_alloc(lpm);
>
>  		if (tbl8_group_index < 0) {
>  			return tbl8_group_index;
> @@ -977,7 +1078,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>  		 */
>  		lpm->tbl24[tbl24_index].valid = 0;
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
> -		tbl8_free(lpm->tbl8, tbl8_group_start);
> +		tbl8_free(lpm, tbl8_group_start);
>  	} else if (tbl8_recycle_index > -1) {
>  		/* Update tbl24 entry. */
>  		struct rte_lpm_tbl_entry new_tbl24_entry = {
> @@ -993,7 +1094,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>  		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
>  				__ATOMIC_RELAXED);
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
> -		tbl8_free(lpm->tbl8, tbl8_group_start);
> +		tbl8_free(lpm, tbl8_group_start);
>  	}
>  #undef group_idx
>  	return 0;
> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index
> b9d49ac87..8c054509a 100644
> --- a/lib/librte_lpm/rte_lpm.h
> +++ b/lib/librte_lpm/rte_lpm.h
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> + * Copyright(c) 2020 Arm Limited
>   */
>
>  #ifndef _RTE_LPM_H_
> @@ -20,6 +21,7 @@
>  #include <rte_memory.h>
>  #include <rte_common.h>
>  #include <rte_vect.h>
> +#include <rte_rcu_qsbr.h>
>
>  #ifdef __cplusplus
>  extern "C" {
> @@ -62,6 +64,17 @@ extern "C" {
>  /** Bitmask used to indicate successful lookup */
>  #define RTE_LPM_LOOKUP_SUCCESS          0x01000000
>
> +/** @internal Default threshold to trigger RCU defer queue reclaimation. */
> +#define RTE_LPM_RCU_DQ_RECLAIM_THD	32
> +
> +/** @internal Default RCU defer queue entries to reclaim in one go. */
> +#define RTE_LPM_RCU_DQ_RECLAIM_MAX	16
> +
> +/* Create defer queue for reclaim. */
> +#define RTE_LPM_QSBR_MODE_DQ		0
> +/* Use blocking mode reclaim. No defer queue created. */
> +#define RTE_LPM_QSBR_MODE_SYNC		0x01
> +
>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>  /** @internal Tbl24 entry structure. */
>  __extension__
> @@ -130,6 +143,28 @@ struct rte_lpm {
>  			__rte_cache_aligned; /**< LPM tbl24 table. */
>  	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>  	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> +
> +	/* RCU config. */
> +	struct rte_rcu_qsbr *v;		/* RCU QSBR variable. */
> +	uint32_t rcu_mode;		/* Blocking, defer queue. */
> +	struct rte_rcu_qsbr_dq *dq;	/* RCU QSBR defer queue. */
> +};
> +
> +/** LPM RCU QSBR configuration structure. */
> +struct rte_lpm_rcu_config {
> +	struct rte_rcu_qsbr *v;	/* RCU QSBR variable. */
> +	/* Mode of RCU QSBR. RTE_LPM_QSBR_MODE_xxx
> +	 * '0' for default: create defer queue for reclaim.
> +	 */
> +	uint32_t mode;
> +	/* RCU defer queue size. default: lpm->number_tbl8s. */
> +	uint32_t dq_size;
> +	uint32_t reclaim_thd;	/* Threshold to trigger auto reclaim.
> +				 * default: RTE_LPM_RCU_DQ_RECLAIM_THD.
> +				 */
> +	uint32_t reclaim_max;	/* Max entries to reclaim in one go.
> +				 * default: RTE_LPM_RCU_DQ_RECLAIM_MAX.
> +				 */
>  };
>
>  /**
> @@ -179,6 +214,30 @@ rte_lpm_find_existing(const char *name);
>  void
>  rte_lpm_free(struct rte_lpm *lpm);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Associate RCU QSBR variable with an LPM object.
> + *
> + * @param lpm
> + *   the lpm object to add RCU QSBR
> + * @param cfg
> + *   RCU QSBR configuration
> + * @param dq
> + *   handler of created RCU QSBR defer queue
> + * @return
> + *   On success - 0
> + *   On error - 1 with error code set in rte_errno.
> + *   Possible rte_errno codes are:
> + *   - EINVAL - invalid pointer
> + *   - EEXIST - already added QSBR
> + *   - ENOMEM - memory allocation failure
> + */
> +__rte_experimental
> +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,
> +	struct rte_rcu_qsbr_dq **dq);
> +
>  /**
>   * Add a rule to the LPM table.
>   *
> diff --git a/lib/librte_lpm/rte_lpm_version.map
> b/lib/librte_lpm/rte_lpm_version.map
> index 500f58b80..bfccd7eac 100644
> --- a/lib/librte_lpm/rte_lpm_version.map
> +++ b/lib/librte_lpm/rte_lpm_version.map
> @@ -21,3 +21,9 @@ DPDK_20.0 {
>
>  	local: *;
>  };
> +
> +EXPERIMENTAL {
> +	global:
> +
> +	rte_lpm_rcu_qsbr_add;
> +};
> --
> 2.17.1