From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Kinsella, Ray" <mdr@ashroe.eu>
To: Ruifeng Wang, Bruce Richardson, Vladimir Medvedkin, John McNamara,
 Marko Kovacevic, Neil Horman
Cc: dev@dpdk.org, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com,
 nd@arm.com
Date: Tue, 30 Jun 2020 11:33:59 +0100
Subject: Re: [dpdk-dev] [PATCH v5 1/3] lib/lpm: integrate RCU QSBR
References: <20190906094534.36060-1-ruifeng.wang@arm.com>
 <20200629080301.97515-1-ruifeng.wang@arm.com>
 <20200629080301.97515-2-ruifeng.wang@arm.com>
In-Reply-To: <20200629080301.97515-2-ruifeng.wang@arm.com>
List-Id: DPDK patches and discussions
Sender: "dev"

On 29/06/2020 09:02, Ruifeng Wang wrote:
> Currently, the tbl8 group is freed even though the readers might be
> using the tbl8 group entries. The freed tbl8 group can be reallocated
> quickly. This results in incorrect lookup results.
>
> RCU QSBR process is integrated for safe tbl8 group reclaim.
> Refer to RCU documentation to understand various aspects of
> integrating RCU library into other libraries.
>
> Signed-off-by: Ruifeng Wang
> Reviewed-by: Honnappa Nagarahalli
> ---
>  doc/guides/prog_guide/lpm_lib.rst  |  32 +++++++
>  lib/librte_lpm/Makefile            |   2 +-
>  lib/librte_lpm/meson.build         |   1 +
>  lib/librte_lpm/rte_lpm.c           | 129 ++++++++++++++++++++++++++---
>  lib/librte_lpm/rte_lpm.h           |  59 +++++++++++++
>  lib/librte_lpm/rte_lpm_version.map |   6 ++
>  6 files changed, 216 insertions(+), 13 deletions(-)
>
> diff --git a/doc/guides/prog_guide/lpm_lib.rst b/doc/guides/prog_guide/lpm_lib.rst
> index 1609a57d0..7cc99044a 100644
> --- a/doc/guides/prog_guide/lpm_lib.rst
> +++ b/doc/guides/prog_guide/lpm_lib.rst
> @@ -145,6 +145,38 @@ depending on whether we need to move to the next table or not.
>  Prefix expansion is one of the keys of this algorithm,
>  since it improves the speed dramatically by adding redundancy.
>
> +Deletion
> +~~~~~~~~
> +
> +When deleting a rule, a replacement rule is searched for. Replacement rule is an existing rule that has
> +the longest prefix match with the rule to be deleted, but has smaller depth.
> +
> +If a replacement rule is found, target tbl24 and tbl8 entries are updated to have the same depth and next hop
> +value with the replacement rule.
> +
> +If no replacement rule can be found, target tbl24 and tbl8 entries will be cleared.
> +
> +Prefix expansion is performed if the rule's depth is not exactly 24 bits or 32 bits.
> +
> +After deleting a rule, a group of tbl8s that belongs to the same tbl24 entry are freed in following cases:
> +
> +* All tbl8s in the group are empty.
> +
> +* All tbl8s in the group have the same values, with depth no greater than 24.
> +
> +Freeing of tbl8s has different behaviors:
> +
> +* If RCU is not used, tbl8s are cleared and reclaimed immediately.
> +
> +* If RCU is used, tbl8s are reclaimed when readers are in quiescent state.
> +
> +When the LPM is not using RCU, tbl8 group can be freed immediately even though the readers might be using
> +the tbl8 group entries. This might result in incorrect lookup results.
> +
> +RCU QSBR process is integrated for safe tbl8 group reclamation. Application has certain responsibilities
> +while using this feature. Please refer to resource reclamation framework of :ref:`RCU library `
> +for more details.
> +
>  Lookup
>  ~~~~~~
>
> diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
> index d682785b6..6f06c5c03 100644
> --- a/lib/librte_lpm/Makefile
> +++ b/lib/librte_lpm/Makefile
> @@ -8,7 +8,7 @@ LIB = librte_lpm.a
>
>  CFLAGS += -O3
>  CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
> -LDLIBS += -lrte_eal -lrte_hash
> +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
>
>  EXPORT_MAP := rte_lpm_version.map
>
> diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build
> index 021ac6d8d..6cfc083c5 100644
> --- a/lib/librte_lpm/meson.build
> +++ b/lib/librte_lpm/meson.build
> @@ -7,3 +7,4 @@ headers = files('rte_lpm.h', 'rte_lpm6.h')
>  # without worrying about which architecture we actually need
>  headers += files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')
>  deps += ['hash']
> +deps += ['rcu']
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> index 38ab512a4..41e9c49b8 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> + * Copyright(c) 2020 Arm Limited
>   */
>
>  #include
> @@ -245,13 +246,84 @@ rte_lpm_free(struct rte_lpm *lpm)
>      TAILQ_REMOVE(lpm_list, te, next);
>
>      rte_mcfg_tailq_write_unlock();
> -
> +#ifdef ALLOW_EXPERIMENTAL_API
> +    if (lpm->dq)
> +        rte_rcu_qsbr_dq_delete(lpm->dq);
> +#endif
>      rte_free(lpm->tbl8);
>      rte_free(lpm->rules_tbl);
>      rte_free(lpm);
>      rte_free(te);
>  }
>
> +static void
> +__lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n)
> +{
> +    struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> +    uint32_t tbl8_group_index = *(uint32_t *)data;
> +    struct rte_lpm_tbl_entry *tbl8 = ((struct rte_lpm *)p)->tbl8;
> +
> +    RTE_SET_USED(n);
> +    /* Set tbl8 group invalid */
> +    __atomic_store(&tbl8[tbl8_group_index], &zero_tbl8_entry,
> +        __ATOMIC_RELAXED);
> +}
> +
> +/* Associate QSBR variable with an LPM object.
> + */
> +int
> +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,
> +    struct rte_rcu_qsbr_dq **dq)
> +{
> +    char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
> +    struct rte_rcu_qsbr_dq_parameters params = {0};
> +
> +    if ((lpm == NULL) || (cfg == NULL)) {
> +        rte_errno = EINVAL;
> +        return 1;
> +    }
> +
> +    if (lpm->v) {
> +        rte_errno = EEXIST;
> +        return 1;
> +    }
> +
> +    if (cfg->mode == RTE_LPM_QSBR_MODE_SYNC) {
> +        /* No other things to do. */
> +    } else if (cfg->mode == RTE_LPM_QSBR_MODE_DQ) {
> +        /* Init QSBR defer queue. */
> +        snprintf(rcu_dq_name, sizeof(rcu_dq_name),
> +            "LPM_RCU_%s", lpm->name);
> +        params.name = rcu_dq_name;
> +        params.size = cfg->dq_size;
> +        if (params.size == 0)
> +            params.size = lpm->number_tbl8s;
> +        params.trigger_reclaim_limit = cfg->reclaim_thd;
> +        params.max_reclaim_size = cfg->reclaim_max;
> +        if (params.max_reclaim_size == 0)
> +            params.max_reclaim_size = RTE_LPM_RCU_DQ_RECLAIM_MAX;
> +        params.esize = sizeof(uint32_t); /* tbl8 group index */
> +        params.free_fn = __lpm_rcu_qsbr_free_resource;
> +        params.p = lpm;
> +        params.v = cfg->v;
> +        lpm->dq = rte_rcu_qsbr_dq_create(&params);
> +        if (lpm->dq == NULL) {
> +            RTE_LOG(ERR, LPM,
> +                "LPM QS defer queue creation failed\n");
> +            return 1;
> +        }
> +        if (dq)
> +            *dq = lpm->dq;
> +    } else {
> +        rte_errno = EINVAL;
> +        return 1;
> +    }
> +    lpm->rcu_mode = cfg->mode;
> +    lpm->v = cfg->v;
> +
> +    return 0;
> +}
> +
>  /*
>   * Adds a rule to the rule table.
>   *
> @@ -394,14 +466,15 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
>   * Find, clean and allocate a tbl8.
>   */
>  static int32_t
> -tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
> +_tbl8_alloc(struct rte_lpm *lpm)
>  {
>      uint32_t group_idx; /* tbl8 group index. */
>      struct rte_lpm_tbl_entry *tbl8_entry;
>
>      /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
> -    for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
> -        tbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> +    for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
> +        tbl8_entry = &lpm->tbl8[group_idx *
> +            RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
>          /* If a free tbl8 group is found clean it and set as VALID. */
>          if (!tbl8_entry->valid_group) {
>              struct rte_lpm_tbl_entry new_tbl8_entry = {
> @@ -427,14 +500,46 @@ tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
>      return -ENOSPC;
>  }
>
> +static int32_t
> +tbl8_alloc(struct rte_lpm *lpm)
> +{
> +    int32_t group_idx; /* tbl8 group index. */
> +
> +    group_idx = _tbl8_alloc(lpm);
> +#ifdef ALLOW_EXPERIMENTAL_API
> +    if ((group_idx == -ENOSPC) && (lpm->dq != NULL)) {
> +        /* If there are no tbl8 groups try to reclaim one. */
> +        if (rte_rcu_qsbr_dq_reclaim(lpm->dq, 1, NULL, NULL, NULL) == 0)
> +            group_idx = _tbl8_alloc(lpm);
> +    }
> +#endif
> +    return group_idx;
> +}
> +
>  static void
> -tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> +tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>  {
> -    /* Set tbl8 group invalid*/
>      struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> -
> -    __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
> +#ifdef ALLOW_EXPERIMENTAL_API
> +    if (!lpm->v) {
> +        /* Set tbl8 group invalid*/
> +        __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
> +            __ATOMIC_RELAXED);
> +    } else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {
> +        /* Wait for quiescent state change. */
> +        rte_rcu_qsbr_synchronize(lpm->v, RTE_QSBR_THRID_INVALID);
> +        /* Set tbl8 group invalid*/
> +        __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
> +            __ATOMIC_RELAXED);
> +    } else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
> +        /* Push into QSBR defer queue. */
> +        rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&tbl8_group_start);
> +    }
> +#else
> +    /* Set tbl8 group invalid*/
> +    __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
>          __ATOMIC_RELAXED);
> +#endif
>  }
>
>  static __rte_noinline int32_t
> @@ -523,7 +628,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>
>      if (!lpm->tbl24[tbl24_index].valid) {
>          /* Search for a free tbl8 group. */
> -        tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
> +        tbl8_group_index = tbl8_alloc(lpm);
>
>          /* Check tbl8 allocation was successful. */
>          if (tbl8_group_index < 0) {
> @@ -569,7 +674,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>      } /* If valid entry but not extended calculate the index into Table8. */
>      else if (lpm->tbl24[tbl24_index].valid_group == 0) {
>          /* Search for free tbl8 group. */
> -        tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
> +        tbl8_group_index = tbl8_alloc(lpm);
>
>          if (tbl8_group_index < 0) {
>              return tbl8_group_index;
> @@ -977,7 +1082,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>           */
>          lpm->tbl24[tbl24_index].valid = 0;
>          __atomic_thread_fence(__ATOMIC_RELEASE);
> -        tbl8_free(lpm->tbl8, tbl8_group_start);
> +        tbl8_free(lpm, tbl8_group_start);
>      } else if (tbl8_recycle_index > -1) {
>          /* Update tbl24 entry. */
>          struct rte_lpm_tbl_entry new_tbl24_entry = {
> @@ -993,7 +1098,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>          __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
>              __ATOMIC_RELAXED);
>          __atomic_thread_fence(__ATOMIC_RELEASE);
> -        tbl8_free(lpm->tbl8, tbl8_group_start);
> +        tbl8_free(lpm, tbl8_group_start);
>      }
>  #undef group_idx
>      return 0;
> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> index b9d49ac87..7889f21b3 100644
> --- a/lib/librte_lpm/rte_lpm.h
> +++ b/lib/librte_lpm/rte_lpm.h
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> + * Copyright(c) 2020 Arm Limited
>   */
>
>  #ifndef _RTE_LPM_H_
> @@ -20,6 +21,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #ifdef __cplusplus
>  extern "C" {
> @@ -62,6 +64,17 @@ extern "C" {
>  /** Bitmask used to indicate successful lookup */
>  #define RTE_LPM_LOOKUP_SUCCESS 0x01000000
>
> +/** @internal Default RCU defer queue entries to reclaim in one go. */
> +#define RTE_LPM_RCU_DQ_RECLAIM_MAX 16
> +
> +/** RCU reclamation modes */
> +enum rte_lpm_qsbr_mode {
> +    /** Create defer queue for reclaim. */
> +    RTE_LPM_QSBR_MODE_DQ = 0,
> +    /** Use blocking mode reclaim. No defer queue created. */
> +    RTE_LPM_QSBR_MODE_SYNC
> +};
> +
>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>  /** @internal Tbl24 entry structure. */
>  __extension__
> @@ -130,6 +143,28 @@ struct rte_lpm {
>          __rte_cache_aligned; /**< LPM tbl24 table. */
>      struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>      struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> +#ifdef ALLOW_EXPERIMENTAL_API
> +    /* RCU config. */
> +    struct rte_rcu_qsbr *v;         /* RCU QSBR variable. */
> +    enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */
> +    struct rte_rcu_qsbr_dq *dq;     /* RCU QSBR defer queue. */
> +#endif
> +};
> +
> +/** LPM RCU QSBR configuration structure. */
> +struct rte_lpm_rcu_config {
> +    struct rte_rcu_qsbr *v; /* RCU QSBR variable. */
> +    /* Mode of RCU QSBR. RTE_LPM_QSBR_MODE_xxx
> +     * '0' for default: create defer queue for reclaim.
> +     */
> +    enum rte_lpm_qsbr_mode mode;
> +    uint32_t dq_size;       /* RCU defer queue size.
> +                             * default: lpm->number_tbl8s.
> +                             */
> +    uint32_t reclaim_thd;   /* Threshold to trigger auto reclaim. */
> +    uint32_t reclaim_max;   /* Max entries to reclaim in one go.
> +                             * default: RTE_LPM_RCU_DQ_RECLAIM_MAX.
> +                             */
>  };
>
>  /**
> @@ -179,6 +214,30 @@ rte_lpm_find_existing(const char *name);
>  void
>  rte_lpm_free(struct rte_lpm *lpm);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Associate RCU QSBR variable with an LPM object.
> + *
> + * @param lpm
> + *   the lpm object to add RCU QSBR
> + * @param cfg
> + *   RCU QSBR configuration
> + * @param dq
> + *   handle of created RCU QSBR defer queue
> + * @return
> + *   On success - 0
> + *   On error - 1 with error code set in rte_errno.
> + *   Possible rte_errno codes are:
> + *   - EINVAL - invalid pointer
> + *   - EEXIST - already added QSBR
> + *   - ENOMEM - memory allocation failure
> + */
> +__rte_experimental
> +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,
> +    struct rte_rcu_qsbr_dq **dq);
> +
>  /**
>   * Add a rule to the LPM table.
>   *
> diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
> index 500f58b80..bfccd7eac 100644
> --- a/lib/librte_lpm/rte_lpm_version.map
> +++ b/lib/librte_lpm/rte_lpm_version.map
> @@ -21,3 +21,9 @@ DPDK_20.0 {
>
>      local: *;
>  };
> +
> +EXPERIMENTAL {
> +    global:
> +
> +    rte_lpm_rcu_qsbr_add;
> +};
>

Acked-by: Ray Kinsella