From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 411A145A1E; Tue, 24 Sep 2024 22:45:37 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E208B40274; Tue, 24 Sep 2024 22:45:36 +0200 (CEST) Received: from mail-oo1-f51.google.com (mail-oo1-f51.google.com [209.85.161.51]) by mails.dpdk.org (Postfix) with ESMTP id 6EF0340270 for ; Tue, 24 Sep 2024 22:45:34 +0200 (CEST) Received: by mail-oo1-f51.google.com with SMTP id 006d021491bc7-5e1c49f9b9aso2315888eaf.2 for ; Tue, 24 Sep 2024 13:45:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=iol.unh.edu; s=unh-iol; t=1727210733; x=1727815533; darn=dpdk.org; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:from:to:cc:subject:date:message-id:reply-to; bh=j4U+t9cMi1l+aTbrLDebz7BFToP2KhaZ2LU+0kof4dA=; b=QA1XmlUL4AzFAad/p/vlWq0QYPjb6DXWKPwsuQdNxtDQNvp3xsQ2L4jxz4y2QbzGAz MaSuHoVcVj42sC9az2NtV4ZNTRScD6yfY2IbkY64T8ZEkbNmbLs3Rf5swbplKIdHixyY llGeqnc3LsmjYRGN4Pm7lo515BitHt94usr6Q= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1727210733; x=1727815533; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=j4U+t9cMi1l+aTbrLDebz7BFToP2KhaZ2LU+0kof4dA=; b=MqtTv0t5G62oXSZEQ8zm2b1i5w+IGV6rn5v0tB9P5O+0wl4RgM733Vkt0G5DdkI8NO 2oZzf+OfqvgeSRADTFQ8RhchvTZ0GRyhz0gSfdGxbpX1NoWulIiPkUitUqG8nYL8uWac IY3sqwRjmIzHMatvwS8CLGVK/W4/eAZmcCMymSY9XCZNrO3AquyBM0Bp+OApyxmNAET8 09euFIKmnWdk9V8ae0qj7QxqeSzlBLnfpz8mF8cRPAj+NK2kkAvL109dM6uZi/tIEpGA 1lCrVh+TIFzgU/T4VqaB/RTy4b5BYq3ymgdtQPEIF/rixQDqJYLe077OesBZPnfTxbnU N5Ew== X-Gm-Message-State: AOJu0YybjsZ8+hOJN+OV6sM1peCnIuqoJa+TIVHVzFOBaRPouJVzsaCx 5cV0Mv/J1DwPpEj//NookK9jiZt+VZEeGZZaUw4rjkJ1+DvpjRidQRpx7y1LNcplbByXVGXZMYn h6DAYW3RUqU8ZYAjItouGqPFoDF2RfYm3vCB/dQ== X-Google-Smtp-Source: AGHT+IHhgL47nS6oRkFBl9lofNz6RkAiIQzq85kwEAmsizX/rrkYv4RRyzaBU8oeJOmJrwBGWocVnOY7OdOLDME8gCg= X-Received: by 2002:a05:6871:289c:b0:25e:1cdf:c604 with SMTP id 586e51a60fabf-286e15ff590mr692755fac.31.1727210733148; Tue, 24 Sep 2024 13:45:33 -0700 (PDT) MIME-Version: 1.0 References: <20240920163203.840770-1-mb@smartsharesystems.com> <20240924181224.1562346-1-mb@smartsharesystems.com> In-Reply-To: <20240924181224.1562346-1-mb@smartsharesystems.com> From: Patrick Robb Date: Tue, 24 Sep 2024 16:44:29 -0400 Message-ID: Subject: Re: [RFC PATCH v7] mempool: fix mempool cache size To: =?UTF-8?Q?Morten_Br=C3=B8rup?= Cc: dev@dpdk.org, =?UTF-8?Q?Mattias_R=C3=B6nnblom?= Content-Type: multipart/alternative; boundary="00000000000033b3ff0622e39670" X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org --00000000000033b3ff0622e39670 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable Recheck-request: iol-intel-Performance On Tue, Sep 24, 2024 at 2:12=E2=80=AFPM Morten Br=C3=B8rup wrote: > This patch refactors the mempool cache to fix two bugs: > 1. When a mempool is created with a cache size of N objects, the cache wa= s > actually created with a size of 1.5 * N objects. > 2. 
The mempool cache field names did not reflect their purpose;
> the "flushthresh" field held the size, and the "size" field held the
> number of objects remaining in the cache when returning from a get
> operation that refills it from the backend.
>
> The first item, in particular, could be fatal:
> When more objects than a mempool's configured cache size are held in the
> mempool's caches associated with other lcores, a right-sized mempool may
> unexpectedly run out of objects, causing the application to fail.
>
> Furthermore, this patch introduces two optimizations:
> 1. The mempool caches are flushed to/filled from the backend in their
> entirety, so backend accesses are CPU cache line aligned. (Assuming the
> mempool cache size is a multiple of the CPU cache line size divided by the
> size of a pointer.)
> 2. The unlikely paths in the get and put functions, where the cache is
> flushed to/filled from the backend, are moved from the inline functions to
> separate helper functions, thereby reducing the code size of the inline
> functions.
> Note: Accessing the backend for cacheless mempools remains inline.
>
> Various drivers accessing the mempool directly have been updated
> accordingly.
> These drivers did not update mempool statistics when accessing the mempool
> directly, so that is fixed too.
>
> Note: Performance not yet benchmarked.
>
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> ---
> v7:
> * Increased max mempool cache size from 512 to 1024 objects.
>   Mainly for CI performance test purposes.
>   Originally, the max mempool cache size was 768 objects, and used a fixed
>   size array of 1024 objects in the mempool cache structure.
> v6:
> * Fix v5's incomplete implementation of passing large requests directly to
>   the backend.
> * Use memcpy instead of rte_memcpy where the compiler complains about it.
> * Added const to some function parameters.
> v5:
> * Moved helper functions back into the header file, for improved
>   performance.
> * Pass large requests directly to the backend. This also simplifies the
>   code.
> v4:
> * Updated subject to reflect that misleading names are considered bugs.
> * Rewrote patch description to provide more details about the bugs fixed.
>   (Mattias Rönnblom)
> * Moved helper functions, not to be inlined, to mempool C file.
>   (Mattias Rönnblom)
> * Pass requests for n >= RTE_MEMPOOL_CACHE_MAX_SIZE objects known at build
>   time directly to the backend driver, to avoid calling the helper functions.
>   This also fixes the compiler warnings about out of bounds array access.
> v3:
> * Removed __attribute__(assume).
> v2:
> * Removed mempool perf test; not part of patch set.
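
To make the first bug and the new sizing semantics concrete, here is a minimal
standalone sketch (not part of the patch; the helper names and the example size
are made up) comparing how many objects a per-lcore cache configured for N
objects could hold before and after this change:

/*
 * Minimal standalone sketch, not part of the patch: per-lcore cache
 * capacity before and after this change. Helper names are invented.
 */
#include <stdio.h>
#include <stdint.h>

/* Old behaviour: the flush threshold was 1.5 * N (CALC_CACHE_FLUSHTHRESH),
 * so a cache created with size N could hold up to 1.5 * N objects. */
static uint32_t old_cache_capacity(uint32_t n)
{
        return (n * 3) / 2;
}

/* New behaviour: the cache never holds more than its configured size, and
 * it is flushed to / filled from the backend in its entirety. */
static uint32_t new_cache_capacity(uint32_t n)
{
        return n;
}

int main(void)
{
        uint32_t n = 256; /* example cache_size passed at mempool creation */

        printf("configured cache size:        %u objects\n", n);
        printf("old per-lcore cache capacity: %u objects\n", old_cache_capacity(n));
        printf("new per-lcore cache capacity: %u objects\n", new_cache_capacity(n));
        return 0;
}

With several lcore caches each allowed to hold up to 1.5 * N objects, a mempool
sized on the assumption of N objects per cache can come up short, which is the
failure mode described above.
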
> --- > config/rte_config.h | 2 +- > drivers/common/idpf/idpf_common_rxtx_avx512.c | 54 +--- > drivers/mempool/dpaa/dpaa_mempool.c | 16 +- > drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 14 - > drivers/net/i40e/i40e_rxtx_vec_avx512.c | 17 +- > drivers/net/iavf/iavf_rxtx_vec_avx512.c | 27 +- > drivers/net/ice/ice_rxtx_vec_avx512.c | 27 +- > lib/mempool/mempool_trace.h | 1 - > lib/mempool/rte_mempool.c | 12 +- > lib/mempool/rte_mempool.h | 287 ++++++++++++------ > 10 files changed, 232 insertions(+), 225 deletions(-) > > diff --git a/config/rte_config.h b/config/rte_config.h > index dd7bb0d35b..2488ff167d 100644 > --- a/config/rte_config.h > +++ b/config/rte_config.h > @@ -56,7 +56,7 @@ > #define RTE_CONTIGMEM_DEFAULT_BUF_SIZE (512*1024*1024) > > /* mempool defines */ > -#define RTE_MEMPOOL_CACHE_MAX_SIZE 512 > +#define RTE_MEMPOOL_CACHE_MAX_SIZE 1024 > /* RTE_LIBRTE_MEMPOOL_STATS is not set */ > /* RTE_LIBRTE_MEMPOOL_DEBUG is not set */ > > diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c > b/drivers/common/idpf/idpf_common_rxtx_avx512.c > index 3b5e124ec8..98535a48f3 100644 > --- a/drivers/common/idpf/idpf_common_rxtx_avx512.c > +++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c > @@ -1024,21 +1024,13 @@ idpf_tx_singleq_free_bufs_avx512(struct > idpf_tx_queue *txq) > > rte_lcore_id()); > void **cache_objs; > > - if (cache =3D=3D NULL || cache->len =3D=3D 0) > - goto normal; > - > - cache_objs =3D &cache->objs[cache->len]; > - > - if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) { > - rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n)= ; > + if (!cache || unlikely(n + cache->len > cache->size)) { > + rte_mempool_generic_put(mp, (void *)txep, n, > cache); > goto done; > } > > - /* The cache follows the following algorithm > - * 1. Add the objects to the cache > - * 2. Anything greater than the cache min value (if it > crosses the > - * cache flush threshold) is flushed to the ring. > - */ > + cache_objs =3D &cache->objs[cache->len]; > + > /* Add elements back into the cache */ > uint32_t copied =3D 0; > /* n is multiple of 32 */ > @@ -1056,16 +1048,13 @@ idpf_tx_singleq_free_bufs_avx512(struct > idpf_tx_queue *txq) > } > cache->len +=3D n; > > - if (cache->len >=3D cache->flushthresh) { > - rte_mempool_ops_enqueue_bulk(mp, > - > &cache->objs[cache->size], > - cache->len - > cache->size); > - cache->len =3D cache->size; > - } > + /* Increment stat. */ > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n); > + > goto done; > } > > -normal: > m =3D rte_pktmbuf_prefree_seg(txep[0].mbuf); > if (likely(m !=3D NULL)) { > free[0] =3D m; > @@ -1335,21 +1324,13 @@ idpf_tx_splitq_free_bufs_avx512(struct > idpf_tx_queue *txq) > > rte_lcore_id()); > void **cache_objs; > > - if (!cache || cache->len =3D=3D 0) > - goto normal; > - > - cache_objs =3D &cache->objs[cache->len]; > - > - if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) { > - rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n)= ; > + if (!cache || unlikely(n + cache->len > cache->size)) { > + rte_mempool_generic_put(mp, (void *)txep, n, > cache); > goto done; > } > > - /* The cache follows the following algorithm > - * 1. Add the objects to the cache > - * 2. Anything greater than the cache min value (if it > crosses the > - * cache flush threshold) is flushed to the ring. 
> - */ > + cache_objs =3D &cache->objs[cache->len]; > + > /* Add elements back into the cache */ > uint32_t copied =3D 0; > /* n is multiple of 32 */ > @@ -1367,16 +1348,13 @@ idpf_tx_splitq_free_bufs_avx512(struct > idpf_tx_queue *txq) > } > cache->len +=3D n; > > - if (cache->len >=3D cache->flushthresh) { > - rte_mempool_ops_enqueue_bulk(mp, > - > &cache->objs[cache->size], > - cache->len - > cache->size); > - cache->len =3D cache->size; > - } > + /* Increment stat. */ > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n); > + > goto done; > } > > -normal: > m =3D rte_pktmbuf_prefree_seg(txep[0].mbuf); > if (likely(m)) { > free[0] =3D m; > diff --git a/drivers/mempool/dpaa/dpaa_mempool.c > b/drivers/mempool/dpaa/dpaa_mempool.c > index 74bfcab509..3a936826c8 100644 > --- a/drivers/mempool/dpaa/dpaa_mempool.c > +++ b/drivers/mempool/dpaa/dpaa_mempool.c > @@ -51,8 +51,6 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp) > struct bman_pool_params params =3D { > .flags =3D BMAN_POOL_FLAG_DYNAMIC_BPID > }; > - unsigned int lcore_id; > - struct rte_mempool_cache *cache; > > MEMPOOL_INIT_FUNC_TRACE(); > > @@ -120,18 +118,6 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp) > rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_info[bpid], > sizeof(struct dpaa_bp_info)); > mp->pool_data =3D (void *)bp_info; > - /* Update per core mempool cache threshold to optimal value which > is > - * number of buffers that can be released to HW buffer pool in > - * a single API call. > - */ > - for (lcore_id =3D 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { > - cache =3D &mp->local_cache[lcore_id]; > - DPAA_MEMPOOL_DEBUG("lCore %d: cache->flushthresh %d -> %d= ", > - lcore_id, cache->flushthresh, > - (uint32_t)(cache->size + DPAA_MBUF_MAX_ACQ_REL)); > - if (cache->flushthresh) > - cache->flushthresh =3D cache->size + > DPAA_MBUF_MAX_ACQ_REL; > - } > > DPAA_MEMPOOL_INFO("BMAN pool created for bpid =3D%d", bpid); > return 0; > @@ -234,7 +220,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool, > DPAA_MEMPOOL_DPDEBUG("Request to alloc %d buffers in bpid =3D %d"= , > count, bp_info->bpid); > > - if (unlikely(count >=3D (RTE_MEMPOOL_CACHE_MAX_SIZE * 2))) { > + if (unlikely(count >=3D RTE_MEMPOOL_CACHE_MAX_SIZE)) { > DPAA_MEMPOOL_ERR("Unable to allocate requested (%u) > buffers", > count); > return -1; > diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c > b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c > index 42e17d984c..a44f3cf616 100644 > --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c > +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c > @@ -44,8 +44,6 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp) > struct dpaa2_bp_info *bp_info; > struct dpbp_attr dpbp_attr; > uint32_t bpid; > - unsigned int lcore_id; > - struct rte_mempool_cache *cache; > int ret; > > avail_dpbp =3D dpaa2_alloc_dpbp_dev(); > @@ -134,18 +132,6 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp) > DPAA2_MEMPOOL_DEBUG("BP List created for bpid =3D%d", > dpbp_attr.bpid); > > h_bp_list =3D bp_list; > - /* Update per core mempool cache threshold to optimal value which > is > - * number of buffers that can be released to HW buffer pool in > - * a single API call. 
> - */ > - for (lcore_id =3D 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { > - cache =3D &mp->local_cache[lcore_id]; > - DPAA2_MEMPOOL_DEBUG("lCore %d: cache->flushthresh %d -> > %d", > - lcore_id, cache->flushthresh, > - (uint32_t)(cache->size + DPAA2_MBUF_MAX_ACQ_REL))= ; > - if (cache->flushthresh) > - cache->flushthresh =3D cache->size + > DPAA2_MBUF_MAX_ACQ_REL; > - } > > return 0; > err3: > diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c > b/drivers/net/i40e/i40e_rxtx_vec_avx512.c > index 0238b03f8a..712ab1726f 100644 > --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c > +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c > @@ -783,18 +783,13 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) > struct rte_mempool_cache *cache =3D > rte_mempool_default_cache(mp, > rte_lcore_id()); > > - if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) { > + if (!cache || unlikely(n + cache->len > cache->size)) { > rte_mempool_generic_put(mp, (void *)txep, n, > cache); > goto done; > } > > cache_objs =3D &cache->objs[cache->len]; > > - /* The cache follows the following algorithm > - * 1. Add the objects to the cache > - * 2. Anything greater than the cache min value (if it > - * crosses the cache flush threshold) is flushed to the > ring. > - */ > /* Add elements back into the cache */ > uint32_t copied =3D 0; > /* n is multiple of 32 */ > @@ -812,12 +807,10 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) > } > cache->len +=3D n; > > - if (cache->len >=3D cache->flushthresh) { > - rte_mempool_ops_enqueue_bulk > - (mp, &cache->objs[cache->size], > - cache->len - cache->size); > - cache->len =3D cache->size; > - } > + /* Increment stat. */ > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n); > + > goto done; > } > > diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c > b/drivers/net/iavf/iavf_rxtx_vec_avx512.c > index 3bb6f305df..307bb8556a 100644 > --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c > +++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c > @@ -1873,21 +1873,13 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *tx= q) > > rte_lcore_id()); > void **cache_objs; > > - if (!cache || cache->len =3D=3D 0) > - goto normal; > - > - cache_objs =3D &cache->objs[cache->len]; > - > - if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) { > - rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n)= ; > + if (!cache || unlikely(n + cache->len > cache->size)) { > + rte_mempool_generic_put(mp, (void *)txep, n, > cache); > goto done; > } > > - /* The cache follows the following algorithm > - * 1. Add the objects to the cache > - * 2. Anything greater than the cache min value (if it > crosses the > - * cache flush threshold) is flushed to the ring. > - */ > + cache_objs =3D &cache->objs[cache->len]; > + > /* Add elements back into the cache */ > uint32_t copied =3D 0; > /* n is multiple of 32 */ > @@ -1905,16 +1897,13 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *tx= q) > } > cache->len +=3D n; > > - if (cache->len >=3D cache->flushthresh) { > - rte_mempool_ops_enqueue_bulk(mp, > - > &cache->objs[cache->size], > - cache->len - > cache->size); > - cache->len =3D cache->size; > - } > + /* Increment stat. 
*/ > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n); > + > goto done; > } > > -normal: > m =3D rte_pktmbuf_prefree_seg(txep[0].mbuf); > if (likely(m)) { > free[0] =3D m; > diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c > b/drivers/net/ice/ice_rxtx_vec_avx512.c > index 04148e8ea2..4ea1db734e 100644 > --- a/drivers/net/ice/ice_rxtx_vec_avx512.c > +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c > @@ -888,21 +888,13 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq) > struct rte_mempool_cache *cache =3D > rte_mempool_default_cache(mp, > rte_lcore_id()); > > - if (!cache || cache->len =3D=3D 0) > - goto normal; > - > - cache_objs =3D &cache->objs[cache->len]; > - > - if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) { > - rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n)= ; > + if (!cache || unlikely(n + cache->len > cache->size)) { > + rte_mempool_generic_put(mp, (void *)txep, n, > cache); > goto done; > } > > - /* The cache follows the following algorithm > - * 1. Add the objects to the cache > - * 2. Anything greater than the cache min value (if it > - * crosses the cache flush threshold) is flushed to the > ring. > - */ > + cache_objs =3D &cache->objs[cache->len]; > + > /* Add elements back into the cache */ > uint32_t copied =3D 0; > /* n is multiple of 32 */ > @@ -920,16 +912,13 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq) > } > cache->len +=3D n; > > - if (cache->len >=3D cache->flushthresh) { > - rte_mempool_ops_enqueue_bulk > - (mp, &cache->objs[cache->size], > - cache->len - cache->size); > - cache->len =3D cache->size; > - } > + /* Increment stat. */ > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n); > + > goto done; > } > > -normal: > m =3D rte_pktmbuf_prefree_seg(txep[0].mbuf); > if (likely(m)) { > free[0] =3D m; > diff --git a/lib/mempool/mempool_trace.h b/lib/mempool/mempool_trace.h > index dffef062e4..3c49b41a6d 100644 > --- a/lib/mempool/mempool_trace.h > +++ b/lib/mempool/mempool_trace.h > @@ -112,7 +112,6 @@ RTE_TRACE_POINT( > rte_trace_point_emit_i32(socket_id); > rte_trace_point_emit_ptr(cache); > rte_trace_point_emit_u32(cache->len); > - rte_trace_point_emit_u32(cache->flushthresh); > ) > > RTE_TRACE_POINT( > diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c > index d8e39e5c20..40fb13239a 100644 > --- a/lib/mempool/rte_mempool.c > +++ b/lib/mempool/rte_mempool.c > @@ -50,11 +50,6 @@ static void > mempool_event_callback_invoke(enum rte_mempool_event event, > struct rte_mempool *mp); > > -/* Note: avoid using floating point since that compiler > - * may not think that is constant. 
> - */ > -#define CALC_CACHE_FLUSHTHRESH(c) (((c) * 3) / 2) > - > #if defined(RTE_ARCH_X86) > /* > * return the greatest common divisor between a and b (fast algorithm) > @@ -746,13 +741,12 @@ rte_mempool_free(struct rte_mempool *mp) > static void > mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size) > { > - /* Check that cache have enough space for flush threshold */ > - > RTE_BUILD_BUG_ON(CALC_CACHE_FLUSHTHRESH(RTE_MEMPOOL_CACHE_MAX_SIZE) > > + /* Check that cache have enough space for size */ > + RTE_BUILD_BUG_ON(RTE_MEMPOOL_CACHE_MAX_SIZE > > RTE_SIZEOF_FIELD(struct rte_mempool_cache, objs)= / > RTE_SIZEOF_FIELD(struct rte_mempool_cache, > objs[0])); > > cache->size =3D size; > - cache->flushthresh =3D CALC_CACHE_FLUSHTHRESH(size); > cache->len =3D 0; > } > > @@ -836,7 +830,7 @@ rte_mempool_create_empty(const char *name, unsigned n= , > unsigned elt_size, > > /* asked cache too big */ > if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE || > - CALC_CACHE_FLUSHTHRESH(cache_size) > n) { > + cache_size > n) { > rte_errno =3D EINVAL; > return NULL; > } > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h > index 7bdc92b812..0801cec24a 100644 > --- a/lib/mempool/rte_mempool.h > +++ b/lib/mempool/rte_mempool.h > @@ -89,10 +89,8 @@ struct __rte_cache_aligned rte_mempool_debug_stats { > */ > struct __rte_cache_aligned rte_mempool_cache { > uint32_t size; /**< Size of the cache */ > - uint32_t flushthresh; /**< Threshold before we flush excess > elements */ > uint32_t len; /**< Current cache count */ > #ifdef RTE_LIBRTE_MEMPOOL_STATS > - uint32_t unused; > /* > * Alternative location for the most frequently updated mempool > statistics (per-lcore), > * providing faster update access when using a mempool cache. > @@ -110,7 +108,7 @@ struct __rte_cache_aligned rte_mempool_cache { > * Cache is allocated to this size to allow it to overflow in > certain > * cases to avoid needless emptying of cache. > */ > - alignas(RTE_CACHE_LINE_SIZE) void *objs[RTE_MEMPOOL_CACHE_MAX_SIZ= E > * 2]; > + alignas(RTE_CACHE_LINE_SIZE) void > *objs[RTE_MEMPOOL_CACHE_MAX_SIZE]; > }; > > /** > @@ -1362,6 +1360,48 @@ rte_mempool_cache_flush(struct rte_mempool_cache > *cache, > cache->len =3D 0; > } > > +/** > + * @internal Put several objects back in the mempool; used internally wh= en > + * the number of objects exceeds the remaining space in the mempool > cache. > + * @param mp > + * A pointer to the mempool structure. > + * @param obj_table > + * A pointer to a table of void * pointers (objects). > + * @param n > + * The number of objects to store back in the mempool, must be strictl= y > + * positive. > + * Must be more than the remaining space in the mempool cache, i.e.: > + * cache->len + n > cache->size > + * Must be less than the size of the mempool cache, i.e.: > + * n < cache->size > + * @param cache > + * A pointer to a mempool cache structure. Not NULL. > + */ > +static void > +rte_mempool_do_generic_put_split(struct rte_mempool *mp, void * const > *obj_table, > + unsigned int n, struct rte_mempool_cache * const cache) > +{ > + void **cache_objs; > + unsigned int len; > + const uint32_t cache_size =3D cache->size; > + > + /* Fill the cache with the first objects. */ > + cache_objs =3D &cache->objs[cache->len]; > + len =3D (cache_size - cache->len); > + cache->len =3D n - len; /* Moved to here (for performance). */ > + /* rte_ */ memcpy(cache_objs, obj_table, sizeof(void *) * len); > + obj_table +=3D len; > + n -=3D len; > + > + /* Flush the entire cache to the backend. 
*/ > + cache_objs =3D &cache->objs[0]; > + rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache_size); > + > + /* Add the remaining objects to the cache. */ > + /* Moved from here (for performance): cache->len =3D n; */ > + /* rte_ */ memcpy(cache_objs, obj_table, sizeof(void *) * n); > +} > + > /** > * @internal Put several objects back in the mempool; used internally. > * @param mp > @@ -1376,52 +1416,44 @@ rte_mempool_cache_flush(struct rte_mempool_cache > *cache, > */ > static __rte_always_inline void > rte_mempool_do_generic_put(struct rte_mempool *mp, void * const > *obj_table, > - unsigned int n, struct rte_mempool_cache *cach= e) > + unsigned int n, struct rte_mempool_cache * > const cache) > { > - void **cache_objs; > - > - /* No cache provided */ > + /* No cache provided? */ > if (unlikely(cache =3D=3D NULL)) > goto driver_enqueue; > > - /* increment stat now, adding in mempool always success */ > + /* Increment stats now, adding in mempool always succeeds. */ > RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1); > RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n); > > - /* The request itself is too big for the cache */ > - if (unlikely(n > cache->flushthresh)) > + /* The request itself is known to be too big for any cache? */ > + if (__rte_constant(n) && n >=3D RTE_MEMPOOL_CACHE_MAX_SIZE) > goto driver_enqueue_stats_incremented; > > - /* > - * The cache follows the following algorithm: > - * 1. If the objects cannot be added to the cache without > crossing > - * the flush threshold, flush the cache to the backend. > - * 2. Add the objects to the cache. > - */ > + /* Enough remaining space in the cache? */ > + if (likely(cache->len + n <=3D cache->size)) { > + void **cache_objs; > > - if (cache->len + n <=3D cache->flushthresh) { > + /* Add the objects to the cache. */ > cache_objs =3D &cache->objs[cache->len]; > cache->len +=3D n; > - } else { > - cache_objs =3D &cache->objs[0]; > - rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len); > - cache->len =3D n; > - } > - > - /* Add the objects to the cache. */ > - rte_memcpy(cache_objs, obj_table, sizeof(void *) * n); > + rte_memcpy(cache_objs, obj_table, sizeof(void *) * n); > + } else if (likely(n < cache->size)) > + rte_mempool_do_generic_put_split(mp, obj_table, n, cache)= ; > + else > + goto driver_enqueue_stats_incremented; > > return; > > driver_enqueue: > > - /* increment stat now, adding in mempool always success */ > + /* Increment stats now, adding in mempool always succeeds. */ > RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1); > RTE_MEMPOOL_STAT_ADD(mp, put_objs, n); > > driver_enqueue_stats_incremented: > > - /* push objects to the backend */ > + /* Push the objects directly to the backend. */ > rte_mempool_ops_enqueue_bulk(mp, obj_table, n); > } > > @@ -1490,122 +1522,183 @@ rte_mempool_put(struct rte_mempool *mp, void > *obj) > } > > /** > - * @internal Get several objects from the mempool; used internally. > + * @internal Get several objects from the mempool; used internally when > + * the number of objects exceeds what is available in the mempool cach= e. > * @param mp > * A pointer to the mempool structure. > * @param obj_table > * A pointer to a table of void * pointers (objects). > * @param n > * The number of objects to get, must be strictly positive. > + * Must be more than available in the mempool cache, i.e.: > + * n > cache->len > * @param cache > - * A pointer to a mempool cache structure. May be NULL if not needed. > + * A pointer to a mempool cache structure. Not NULL. > * @return > * - 0: Success. 
> * - <0: Error; code of driver dequeue function. > */ > -static __rte_always_inline int > -rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table, > - unsigned int n, struct rte_mempool_cache *cach= e) > +static int > +rte_mempool_do_generic_get_split(struct rte_mempool *mp, void **obj_tabl= e, > + unsigned int n, struct rte_mempool_cache * const cache) > { > int ret; > unsigned int remaining; > uint32_t index, len; > void **cache_objs; > + const uint32_t cache_size =3D cache->size; > > - /* No cache provided */ > - if (unlikely(cache =3D=3D NULL)) { > - remaining =3D n; > - goto driver_dequeue; > - } > - > - /* The cache is a stack, so copy will be in reverse order. */ > + /* Serve the first part of the request from the cache to return > hot objects first. */ > cache_objs =3D &cache->objs[cache->len]; > + len =3D cache->len; > + remaining =3D n - len; > + for (index =3D 0; index < len; index++) > + *obj_table++ =3D *--cache_objs; > > - if (__rte_constant(n) && n <=3D cache->len) { > + /* At this point, the cache is empty. */ > + > + /* More than can be served from a full cache? */ > + if (unlikely(remaining >=3D cache_size)) { > /* > - * The request size is known at build time, and > - * the entire request can be satisfied from the cache, > - * so let the compiler unroll the fixed length copy loop. > + * Serve the following part of the request directly from > the backend > + * in multipla of the cache size. > */ > - cache->len -=3D n; > - for (index =3D 0; index < n; index++) > - *obj_table++ =3D *--cache_objs; > + len =3D remaining - remaining % cache_size; > + ret =3D rte_mempool_ops_dequeue_bulk(mp, obj_table, len); > + if (unlikely(ret < 0)) { > + /* > + * No further action is required to roll back the > request, > + * as objects in the cache are intact, and no > objects have > + * been dequeued from the backend. > + */ > > - RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1); > - RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n); > + RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1); > + RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n); > > - return 0; > - } > + return ret; > + } > > - /* > - * Use the cache as much as we have to return hot objects first. > - * If the request size 'n' is known at build time, the above > comparison > - * ensures that n > cache->len here, so omit RTE_MIN(). > - */ > - len =3D __rte_constant(n) ? cache->len : RTE_MIN(n, cache->len); > - cache->len -=3D len; > - remaining =3D n - len; > - for (index =3D 0; index < len; index++) > - *obj_table++ =3D *--cache_objs; > + remaining -=3D len; > + obj_table +=3D len; > > - /* > - * If the request size 'n' is known at build time, the case > - * where the entire request can be satisfied from the cache > - * has already been handled above, so omit handling it here. > - */ > - if (!__rte_constant(n) && remaining =3D=3D 0) { > - /* The entire request is satisfied from the cache. */ > + if (unlikely(remaining =3D=3D 0)) { > + cache->len =3D 0; > > - RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1); > - RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, > get_success_bulk, 1); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, > get_success_objs, n); > > - return 0; > + return 0; > + } > } > > - /* if dequeue below would overflow mem allocated for cache */ > - if (unlikely(remaining > RTE_MEMPOOL_CACHE_MAX_SIZE)) > - goto driver_dequeue; > - > - /* Fill the cache from the backend; fetch size + remaining > objects. 
*/ > - ret =3D rte_mempool_ops_dequeue_bulk(mp, cache->objs, > - cache->size + remaining); > + /* Fill the entire cache from the backend. */ > + ret =3D rte_mempool_ops_dequeue_bulk(mp, cache->objs, cache_size)= ; > if (unlikely(ret < 0)) { > /* > - * We are buffer constrained, and not able to allocate > - * cache + remaining. > - * Do not fill the cache, just satisfy the remaining part > of > - * the request directly from the backend. > + * Unable to fill the cache. > + * Last resort: Try only the remaining part of the reques= t, > + * served directly from the backend. > */ > - goto driver_dequeue; > + ret =3D rte_mempool_ops_dequeue_bulk(mp, obj_table, > remaining); > + if (unlikely(ret =3D=3D 0)) { > + cache->len =3D 0; > + > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, > get_success_bulk, 1); > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, > get_success_objs, n); > + > + return 0; > + } > + > + /* Roll back. */ > + if (cache->len + remaining =3D=3D n) { > + /* > + * No further action is required to roll back the > request, > + * as objects in the cache are intact, and no > objects have > + * been dequeued from the backend. > + */ > + } else { > + /* Update the state of the cache before putting > back the objects. */ > + cache->len =3D 0; > + > + len =3D n - remaining; > + obj_table -=3D len; > + rte_mempool_do_generic_put(mp, obj_table, len, > cache); > + } > + > + RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1); > + RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n); > + > + return ret; > } > > - /* Satisfy the remaining part of the request from the filled > cache. */ > - cache_objs =3D &cache->objs[cache->size + remaining]; > + /* Serve the remaining part of the request from the filled cache. > */ > + cache_objs =3D &cache->objs[cache_size]; > for (index =3D 0; index < remaining; index++) > *obj_table++ =3D *--cache_objs; > > - cache->len =3D cache->size; > + cache->len =3D cache_size - remaining; > > RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1); > RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n); > > return 0; > +} > > -driver_dequeue: > +/** > + * @internal Get several objects from the mempool; used internally. > + * @param mp > + * A pointer to the mempool structure. > + * @param obj_table > + * A pointer to a table of void * pointers (objects). > + * @param n > + * The number of objects to get, must be strictly positive. > + * @param cache > + * A pointer to a mempool cache structure. May be NULL if not needed. > + * @return > + * - 0: Success. > + * - <0: Error; code of driver dequeue function. > + */ > +static __rte_always_inline int > +rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table, > + unsigned int n, struct rte_mempool_cache * > const cache) > +{ > + int ret; > > - /* Get remaining objects directly from the backend. */ > - ret =3D rte_mempool_ops_dequeue_bulk(mp, obj_table, remaining); > + /* No cache provided? */ > + if (unlikely(cache =3D=3D NULL)) > + goto driver_dequeue; > > - if (ret < 0) { > - if (likely(cache !=3D NULL)) { > - cache->len =3D n - remaining; > - /* > - * No further action is required to roll the firs= t > part > - * of the request back into the cache, as objects > in > - * the cache are intact. > - */ > - } > + /* The request itself is known to be too big for any cache? */ > + if (__rte_constant(n) && n >=3D RTE_MEMPOOL_CACHE_MAX_SIZE) > + goto driver_dequeue; > + > + /* The request can be served entirely from the cache? 
*/
> +       if (likely(n <=3D cache->len)) {
> +               unsigned int index;
> +               void **cache_objs;
>
> +               RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
> +               RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
> +
> +               /*
> +                * The cache is a stack, so copy will be in reverse order=
.
> +                * If the request size is known at build time,
> +                * the compiler will unroll the fixed length copy loop.
> +                */
> +               cache_objs =3D &cache->objs[cache->len];
> +               cache->len -=3D n;
> +               for (index =3D 0; index < n; index++)
> +                       *obj_table++ =3D *--cache_objs;
> +
> +               return 0;
> +       } else
> +               return rte_mempool_do_generic_get_split(mp, obj_table, n,
> cache);
> +
> +driver_dequeue:
> +
> +       /* Get the objects directly from the backend. */
> +       ret =3D rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
> +       if (unlikely(ret < 0)) {
>                 RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
>                 RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
>         } else {
> --
> 2.43.0
>
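
For readers skimming the diff, a short usage sketch (not part of the patch; the
bulk sizes and the assumed cache size of 256 are made up, and the sketch does
not bother returning every object it takes) showing which of the new paths a
bulk get or put ends up on:

/*
 * Illustrative usage sketch, not part of the patch: how different bulk
 * sizes map onto the new fast and split paths of the inlined get/put
 * functions. Error handling is omitted for brevity.
 */
#include <rte_mempool.h>

static void mempool_path_demo(struct rte_mempool *mp)
{
        void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE];

        /* Small request: likely served entirely from the per-lcore cache
         * (the inlined fast path). */
        rte_mempool_get_bulk(mp, objs, 32);

        /* Request larger than what the cache currently holds: the _split
         * helper serves what it can from the cache, refills the entire
         * cache from the backend, and serves the rest from it. */
        rte_mempool_get_bulk(mp, objs, 200);

        /* Build-time constant request of at least RTE_MEMPOOL_CACHE_MAX_SIZE
         * objects: the cache is bypassed and the request goes straight to
         * the backend driver. */
        rte_mempool_get_bulk(mp, objs, RTE_MEMPOOL_CACHE_MAX_SIZE);

        /* Puts mirror the gets: small bulks are added to the cache; bulks
         * that do not fit flush the whole cache or, if large enough, go
         * directly to the backend. */
        rte_mempool_put_bulk(mp, objs, 32);
}

The build-time constant check is what lets requests of RTE_MEMPOOL_CACHE_MAX_SIZE
or more objects skip the cache without ever calling the out-of-line helpers.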
Recheck-request:=C2=A0iol-intel-Performance

On Tue, Sep 24,= 2024 at 2:12=E2=80=AFPM Morten Br=C3=B8rup <mb@smartsharesystems.com> wrote:
This patch refactors the mempool c= ache to fix two bugs:
1. When a mempool is created with a cache size of N objects, the cache was<= br> actually created with a size of 1.5 * N objects.
2. The mempool cache field names did not reflect their purpose;
the "flushthresh" field held the size, and the "size" f= ield held the
number of objects remaining in the cache when returning from a get
operation refilling it from the backend.

Especially the first item could be fatal:
When more objects than a mempool's configured cache size is held in the=
mempool's caches associated with other lcores, a rightsized mempool may=
unexpectedly run out of objects, causing the application to fail.

Furthermore, this patch introduces two optimizations:
1. The mempool caches are flushed to/filled from the backend in their
entirety, so backend accesses are CPU cache line aligned. (Assuming the
mempool cache size is a multiplum of a CPU cache line size divided by the size of a pointer.)
2. The unlikely paths in the get and put functions, where the cache is
flushed to/filled from the backend, are moved from the inline functions to<= br> separate helper functions, thereby reducing the code size of the inline
functions.
Note: Accessing the backend for cacheless mempools remains inline.

Various drivers accessing the mempool directly have been updated
accordingly.
These drivers did not update mempool statistics when accessing the mempool<= br> directly, so that is fixed too.

Note: Performance not yet benchmarked.

Signed-off-by: Morten Br=C3=B8rup <mb@smartsharesystems.com>
---
v7:
* Increased max mempool cache size from 512 to 1024 objects.
=C2=A0 Mainly for CI performance test purposes.
=C2=A0 Originally, the max mempool cache size was 768 objects, and used a f= ixed
=C2=A0 size array of 1024 objects in the mempool cache structure.
v6:
* Fix v5 incomplete implementation of passing large requests directly to =C2=A0 the backend.
* Use memcpy instead of rte_memcpy where compiler complains about it.
* Added const to some function parameters.
v5:
* Moved helper functions back into the header file, for improved
=C2=A0 performance.
* Pass large requests directly to the backend. This also simplifies the
=C2=A0 code.
v4:
* Updated subject to reflect that misleading names are considered bugs.
* Rewrote patch description to provide more details about the bugs fixed. =C2=A0 (Mattias R=C3=B6nnblom)
* Moved helper functions, not to be inlined, to mempool C file.
=C2=A0 (Mattias R=C3=B6nnblom)
* Pass requests for n >=3D RTE_MEMPOOL_CACHE_MAX_SIZE objects known at b= uild
=C2=A0 time directly to backend driver, to avoid calling the helper functio= ns.
=C2=A0 This also fixes the compiler warnings about out of bounds array acce= ss.
v3:
* Removed __attribute__(assume).
v2:
* Removed mempool perf test; not part of patch set.
---
=C2=A0config/rte_config.h=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0|=C2=A0 =C2=A02 +-
=C2=A0drivers/common/idpf/idpf_common_rxtx_avx512.c |=C2=A0 54 +---
=C2=A0drivers/mempool/dpaa/dpaa_mempool.c=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0= =C2=A0|=C2=A0 16 +-
=C2=A0drivers/mempool/dpaa2/dpaa2_hw_mempool.c=C2=A0 =C2=A0 =C2=A0 |=C2=A0 = 14 -
=C2=A0drivers/net/i40e/i40e_rxtx_vec_avx512.c=C2=A0 =C2=A0 =C2=A0 =C2=A0|= =C2=A0 17 +-
=C2=A0drivers/net/iavf/iavf_rxtx_vec_avx512.c=C2=A0 =C2=A0 =C2=A0 =C2=A0|= =C2=A0 27 +-
=C2=A0drivers/net/ice/ice_rxtx_vec_avx512.c=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0|=C2=A0 27 +-
=C2=A0lib/mempool/mempool_trace.h=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0|=C2=A0 =C2=A01 -
=C2=A0lib/mempool/rte_mempool.c=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0|=C2=A0 12 +-
=C2=A0lib/mempool/rte_mempool.h=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0| 287 ++++++++++++------
=C2=A010 files changed, 232 insertions(+), 225 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index dd7bb0d35b..2488ff167d 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -56,7 +56,7 @@
=C2=A0#define RTE_CONTIGMEM_DEFAULT_BUF_SIZE (512*1024*1024)

=C2=A0/* mempool defines */
-#define RTE_MEMPOOL_CACHE_MAX_SIZE 512
+#define RTE_MEMPOOL_CACHE_MAX_SIZE 1024
=C2=A0/* RTE_LIBRTE_MEMPOOL_STATS is not set */
=C2=A0/* RTE_LIBRTE_MEMPOOL_DEBUG is not set */

diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common= /idpf/idpf_common_rxtx_avx512.c
index 3b5e124ec8..98535a48f3 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -1024,21 +1024,13 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_que= ue *txq)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 rte_lcore_id());
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 void **cache_objs;<= br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (cache =3D=3D NU= LL || cache->len =3D=3D 0)
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0goto normal;
-
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cache_objs =3D &= ;cache->objs[cache->len];
-
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (n > RTE_MEMP= OOL_CACHE_MAX_SIZE) {
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (!cache || unlik= ely(n + cache->len > cache->size)) {
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0rte_mempool_generic_put(mp, (void *)txep, n, cache);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 goto done;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 }

-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0/* The cache follow= s the following algorithm
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A01. A= dd the objects to the cache
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A02. A= nything greater than the cache min value (if it crosses the
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A0cach= e flush threshold) is flushed to the ring.
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 */
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cache_objs =3D &= ;cache->objs[cache->len];
+
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 /* Add elements bac= k into the cache */
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 uint32_t copied =3D= 0;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 /* n is multiple of= 32 */
@@ -1056,16 +1048,13 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_que= ue *txq)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 }
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 cache->len +=3D = n;

-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (cache->len &= gt;=3D cache->flushthresh) {
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0rte_mempool_ops_enqueue_bulk(mp,
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 &cache->objs[cache->size],
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 cache->len - cache->size);
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0cache->len =3D cache->size;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0/* Increment stat. = */
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RTE_MEMPOOL_CACHE_S= TAT_ADD(cache, put_bulk, 1);
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RTE_MEMPOOL_CACHE_S= TAT_ADD(cache, put_objs, n);
+
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 goto done;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 }

-normal:
=C2=A0 =C2=A0 =C2=A0 =C2=A0 m =3D rte_pktmbuf_prefree_seg(txep[0].mbuf); =C2=A0 =C2=A0 =C2=A0 =C2=A0 if (likely(m !=3D NULL)) {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 free[0] =3D m;
@@ -1335,21 +1324,13 @@ idpf_tx_splitq_free_bufs_avx512(struct idpf_tx_queu= e *txq)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 rte_lcore_id());
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 void **cache_objs;<= br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (!cache || cache= ->len =3D=3D 0)
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0goto normal;
-
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cache_objs =3D &= ;cache->objs[cache->len];
-
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (n > RTE_MEMP= OOL_CACHE_MAX_SIZE) {
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (!cache || unlik= ely(n + cache->len > cache->size)) {
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0rte_mempool_generic_put(mp, (void *)txep, n, cache);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 goto done;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 }

-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0/* The cache follow= s the following algorithm
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A01. A= dd the objects to the cache
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A02. A= nything greater than the cache min value (if it crosses the
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A0cach= e flush threshold) is flushed to the ring.
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 */
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cache_objs =3D &= ;cache->objs[cache->len];
+
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 /* Add elements bac= k into the cache */
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 uint32_t copied =3D= 0;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 /* n is multiple of= 32 */
@@ -1367,16 +1348,13 @@ idpf_tx_splitq_free_bufs_avx512(struct idpf_tx_queu= e *txq)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 }
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 cache->len +=3D = n;

-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (cache->len &= gt;=3D cache->flushthresh) {
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0rte_mempool_ops_enqueue_bulk(mp,
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 &cache->objs[cache->size],
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 cache->len - cache->size);
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0cache->len =3D cache->size;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0/* Increment stat. = */
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RTE_MEMPOOL_CACHE_S= TAT_ADD(cache, put_bulk, 1);
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RTE_MEMPOOL_CACHE_S= TAT_ADD(cache, put_objs, n);
+
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 goto done;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 }

-normal:
=C2=A0 =C2=A0 =C2=A0 =C2=A0 m =3D rte_pktmbuf_prefree_seg(txep[0].mbuf); =C2=A0 =C2=A0 =C2=A0 =C2=A0 if (likely(m)) {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 free[0] =3D m;
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpa= a_mempool.c
index 74bfcab509..3a936826c8 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -51,8 +51,6 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 struct bman_pool_params params =3D {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 .flags =3D BMAN_POO= L_FLAG_DYNAMIC_BPID
=C2=A0 =C2=A0 =C2=A0 =C2=A0 };
-=C2=A0 =C2=A0 =C2=A0 =C2=A0unsigned int lcore_id;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0struct rte_mempool_cache *cache;

=C2=A0 =C2=A0 =C2=A0 =C2=A0 MEMPOOL_INIT_FUNC_TRACE();

@@ -120,18 +118,6 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_= info[bpid],
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sizeof= (struct dpaa_bp_info));
=C2=A0 =C2=A0 =C2=A0 =C2=A0 mp->pool_data =3D (void *)bp_info;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0/* Update per core mempool cache threshold to o= ptimal value which is
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 * number of buffers that can be released to HW= buffer pool in
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 * a single API call.
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 */
-=C2=A0 =C2=A0 =C2=A0 =C2=A0for (lcore_id =3D 0; lcore_id < RTE_MAX_LCOR= E; lcore_id++) {
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cache =3D &mp-&= gt;local_cache[lcore_id];
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0DPAA_MEMPOOL_DEBUG(= "lCore %d: cache->flushthresh %d -> %d",
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0lcore_id, cache->flushthresh,
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0(uint32_t)(cache->size + DPAA_MBUF_MAX_ACQ_REL));
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (cache->flush= thresh)
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0cache->flushthresh =3D cache->size + DPAA_MBUF_MAX_ACQ_REL;=
-=C2=A0 =C2=A0 =C2=A0 =C2=A0}

=C2=A0 =C2=A0 =C2=A0 =C2=A0 DPAA_MEMPOOL_INFO("BMAN pool created for b= pid =3D%d", bpid);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 return 0;
@@ -234,7 +220,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool,
=C2=A0 =C2=A0 =C2=A0 =C2=A0 DPAA_MEMPOOL_DPDEBUG("Request to alloc %d = buffers in bpid =3D %d",
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0count, bp_info->bpid);

-=C2=A0 =C2=A0 =C2=A0 =C2=A0if (unlikely(count >=3D (RTE_MEMPOOL_CACHE_M= AX_SIZE * 2))) {
+=C2=A0 =C2=A0 =C2=A0 =C2=A0if (unlikely(count >=3D RTE_MEMPOOL_CACHE_MA= X_SIZE)) {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 DPAA_MEMPOOL_ERR(&q= uot;Unable to allocate requested (%u) buffers",
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0count);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 return -1;
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpa= a2/dpaa2_hw_mempool.c
index 42e17d984c..a44f3cf616 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -44,8 +44,6 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 struct dpaa2_bp_info *bp_info;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 struct dpbp_attr dpbp_attr;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 uint32_t bpid;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0unsigned int lcore_id;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0struct rte_mempool_cache *cache;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 int ret;

=C2=A0 =C2=A0 =C2=A0 =C2=A0 avail_dpbp =3D dpaa2_alloc_dpbp_dev();
@@ -134,18 +132,6 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 DPAA2_MEMPOOL_DEBUG("BP List created for b= pid =3D%d", dpbp_attr.bpid);

=C2=A0 =C2=A0 =C2=A0 =C2=A0 h_bp_list =3D bp_list;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0/* Update per core mempool cache threshold to o= ptimal value which is
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 * number of buffers that can be released to HW= buffer pool in
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 * a single API call.
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 */
-=C2=A0 =C2=A0 =C2=A0 =C2=A0for (lcore_id =3D 0; lcore_id < RTE_MAX_LCOR= E; lcore_id++) {
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cache =3D &mp-&= gt;local_cache[lcore_id];
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0DPAA2_MEMPOOL_DEBUG= ("lCore %d: cache->flushthresh %d -> %d",
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0lcore_id, cache->flushthresh,
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0(uint32_t)(cache->size + DPAA2_MBUF_MAX_ACQ_REL));
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (cache->flush= thresh)
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0cache->flushthresh =3D cache->size + DPAA2_MBUF_MAX_ACQ_REL= ;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0}

=C2=A0 =C2=A0 =C2=A0 =C2=A0 return 0;
=C2=A0err3:
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40= e_rxtx_vec_avx512.c
index 0238b03f8a..712ab1726f 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -783,18 +783,13 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 struct rte_mempool_= cache *cache =3D rte_mempool_default_cache(mp,
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 rte_lcore_id());

-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (!cache || n >= ; RTE_MEMPOOL_CACHE_MAX_SIZE) {
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (!cache || unlik= ely(n + cache->len > cache->size)) {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 rte_mempool_generic_put(mp, (void *)txep, n, cache);
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 goto done;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 }

=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 cache_objs =3D &= ;cache->objs[cache->len];

-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0/* The cache follow= s the following algorithm
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A01. A= dd the objects to the cache
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A02. A= nything greater than the cache min value (if it
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 *=C2=A0 =C2=A0cros= ses the cache flush threshold) is flushed to the ring.
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 */
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 /* Add elements bac= k into the cache */
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 uint32_t copied =3D= 0;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 /* n is multiple of= 32 */
@@ -812,12 +807,10 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 }
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 cache->len +=3D = n;

-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (cache->len &= gt;=3D cache->flushthresh) {
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0rte_mempool_ops_enqueue_bulk
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0(mp, &cache->objs[cache->si= ze],
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0cache->len - cache->size);
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0cache->len =3D cache->size;
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0/* Increment stat. = */
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RTE_MEMPOOL_CACHE_S= TAT_ADD(cache, put_bulk, 1);
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RTE_MEMPOOL_CACHE_S= TAT_ADD(cache, put_objs, n);
+
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 goto done;
=C2=A0 =C2=A0 =C2=A0 =C2=A0 }

diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 3bb6f305df..307bb8556a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1873,21 +1873,13 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 				rte_lcore_id());
 		void **cache_objs;
-		if (!cache || cache->len == 0)
-			goto normal;
-
-		cache_objs = &cache->objs[cache->len];
-
-		if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
-			rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+		if (!cache || unlikely(n + cache->len > cache->size)) {
+			rte_mempool_generic_put(mp, (void *)txep, n, cache);
 			goto done;
 		}

-		/* The cache follows the following algorithm
-		 *   1. Add the objects to the cache
-		 *   2. Anything greater than the cache min value (if it crosses the
-		 *   cache flush threshold) is flushed to the ring.
-		 */
+		cache_objs = &cache->objs[cache->len];
+
 		/* Add elements back into the cache */
 		uint32_t copied = 0;
 		/* n is multiple of 32 */
@@ -1905,16 +1897,13 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 		}
 		cache->len += n;

-		if (cache->len >= cache->flushthresh) {
-			rte_mempool_ops_enqueue_bulk(mp,
-						&cache->objs[cache->size],
-						cache->len - cache->size);
-			cache->len = cache->size;
-		}
+		/* Increment stat. */
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+
 		goto done;
 	}

-normal:
 	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
 	if (likely(m)) {
 		free[0] = m;
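
Just to confirm I am reading the new driver fast path correctly: with flushthresh gone, both AVX-512 Tx free paths above and below boil down to roughly the sketch that follows. The helper name tx_put_bulk_to_cache() and the plain memcpy of an object array are invented for illustration; only the size check and the rte_mempool_generic_put() fallback are what the patch actually does.

#include <string.h>
#include <rte_branch_prediction.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Illustrative only, not part of the patch. */
static inline void
tx_put_bulk_to_cache(struct rte_mempool *mp, void **objs, unsigned int n)
{
	struct rte_mempool_cache *cache =
		rte_mempool_default_cache(mp, rte_lcore_id());

	/* No cache, or the burst would overflow the cache as now sized:
	 * let the mempool library deal with it.
	 */
	if (!cache || unlikely(n + cache->len > cache->size)) {
		rte_mempool_generic_put(mp, objs, n, cache);
		return;
	}

	/* Fast path: append directly to the per-lcore cache... */
	memcpy(&cache->objs[cache->len], objs, sizeof(void *) * n);
	cache->len += n;

	/* ...and keep the cache statistics consistent, as the patch now does. */
	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
}
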
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 04148e8ea2..4ea1db734e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -888,21 +888,13 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 				rte_lcore_id());

-		if (!cache || cache->len == 0)
-			goto normal;
-
-		cache_objs = &cache->objs[cache->len];
-
-		if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
-			rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
+		if (!cache || unlikely(n + cache->len > cache->size)) {
+			rte_mempool_generic_put(mp, (void *)txep, n, cache);
 			goto done;
 		}

-		/* The cache follows the following algorithm
-		 *   1. Add the objects to the cache
-		 *   2. Anything greater than the cache min value (if it
-		 *   crosses the cache flush threshold) is flushed to the ring.
-		 */
+		cache_objs = &cache->objs[cache->len];
+
 		/* Add elements back into the cache */
 		uint32_t copied = 0;
 		/* n is multiple of 32 */
@@ -920,16 +912,13 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 		}
 		cache->len += n;

-		if (cache->len >= cache->flushthresh) {
-			rte_mempool_ops_enqueue_bulk
-					(mp, &cache->objs[cache->size],
-					 cache->len - cache->size);
-			cache->len = cache->size;
-		}
+		/* Increment stat. */
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+
 		goto done;
 	}

-normal:
 	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
 	if (likely(m)) {
 		free[0] = m;
diff --git a/lib/mempool/mempool_trace.h b/lib/mempool/mempool_trace.h
index dffef062e4..3c49b41a6d 100644
--- a/lib/mempool/mempool_trace.h
+++ b/lib/mempool/mempool_trace.h
@@ -112,7 +112,6 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_i32(socket_id);
 	rte_trace_point_emit_ptr(cache);
 	rte_trace_point_emit_u32(cache->len);
-	rte_trace_point_emit_u32(cache->flushthresh);
 )

 RTE_TRACE_POINT(
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index d8e39e5c20..40fb13239a 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -50,11 +50,6 @@ static void
 mempool_event_callback_invoke(enum rte_mempool_event event,
 			      struct rte_mempool *mp);

-/* Note: avoid using floating point since that compiler
- * may not think that is constant.
- */
-#define CALC_CACHE_FLUSHTHRESH(c) (((c) * 3) / 2)
-
 #if defined(RTE_ARCH_X86)
 /*
  * return the greatest common divisor between a and b (fast algorithm)
@@ -746,13 +741,12 @@ rte_mempool_free(struct rte_mempool *mp)
 static void
 mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
 {
-	/* Check that cache have enough space for flush threshold */
-	RTE_BUILD_BUG_ON(CALC_CACHE_FLUSHTHRESH(RTE_MEMPOOL_CACHE_MAX_SIZE) >
+	/* Check that cache have enough space for size */
+	RTE_BUILD_BUG_ON(RTE_MEMPOOL_CACHE_MAX_SIZE >
 			 RTE_SIZEOF_FIELD(struct rte_mempool_cache, objs) /
 			 RTE_SIZEOF_FIELD(struct rte_mempool_cache, objs[0]));

 	cache->size = size;
-	cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
 	cache->len = 0;
 }

@@ -836,7 +830,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,

 	/* asked cache too big */
 	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
-	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
+	    cache_size > n) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
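
One nice side effect of dropping CALC_CACHE_FLUSHTHRESH() here: a right-sized pool can now use a cache as large as the pool itself. For example (numbers picked for illustration, not from the patch), a creation like the one below used to fail with EINVAL because 1.5 * 512 = 768 > 512, and now only has to satisfy cache_size <= n (and cache_size <= RTE_MEMPOOL_CACHE_MAX_SIZE):

#include <rte_mempool.h>

/* Illustrative only, not part of the patch. */
static struct rte_mempool *
make_pool_with_full_size_cache(void)
{
	return rte_mempool_create("illustrative_pool",
			512,	/* n: number of objects in the pool */
			2048,	/* object size */
			512,	/* cache_size: now only needs to be <= n */
			0,	/* private data size */
			NULL, NULL,	/* mp_init, mp_init_arg */
			NULL, NULL,	/* obj_init, obj_init_arg */
			SOCKET_ID_ANY, 0);
}
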
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 7bdc92b812..0801cec24a 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -89,10 +89,8 @@ struct __rte_cache_aligned rte_mempool_debug_stats {
  */
 struct __rte_cache_aligned rte_mempool_cache {
 	uint32_t size;	      /**< Size of the cache */
-	uint32_t flushthresh; /**< Threshold before we flush excess elements */
 	uint32_t len;	      /**< Current cache count */
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
-	uint32_t unused;
 	/*
 	 * Alternative location for the most frequently updated mempool statistics (per-lcore),
 	 * providing faster update access when using a mempool cache.
@@ -110,7 +108,7 @@ struct __rte_cache_aligned rte_mempool_cache {
 	 * Cache is allocated to this size to allow it to overflow in certain
 	 * cases to avoid needless emptying of cache.
	 */
-	alignas(RTE_CACHE_LINE_SIZE) void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2];
+	alignas(RTE_CACHE_LINE_SIZE) void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE];
 };

 /**
@@ -1362,6 +1360,48 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
 	cache->len = 0;
 }

+/**
+ * @internal Put several objects back in the mempool; used internally when
+ *   the number of objects exceeds the remaining space in the mempool cache.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to store back in the mempool, must be strictly
+ *   positive.
+ *   Must be more than the remaining space in the mempool cache, i.e.:
+ *   cache->len + n > cache->size
+ *   Must be less than the size of the mempool cache, i.e.:
+ *   n < cache->size
+ * @param cache
+ *   A pointer to a mempool cache structure. Not NULL.
+ */
+static void
+rte_mempool_do_generic_put_split(struct rte_mempool *mp, void * const *obj_table,
+		unsigned int n, struct rte_mempool_cache * const cache)
+{
+	void **cache_objs;
+	unsigned int len;
+	const uint32_t cache_size = cache->size;
+
+	/* Fill the cache with the first objects. */
+	cache_objs = &cache->objs[cache->len];
+	len = (cache_size - cache->len);
+	cache->len = n - len; /* Moved to here (for performance). */
+	/* rte_ */ memcpy(cache_objs, obj_table, sizeof(void *) * len);
+	obj_table += len;
+	n -= len;
+
+	/* Flush the entire cache to the backend. */
+	cache_objs = &cache->objs[0];
+	rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache_size);
+
+	/* Add the remaining objects to the cache. */
+	/* Moved from here (for performance): cache->len = n; */
+	/* rte_ */ memcpy(cache_objs, obj_table, sizeof(void *) * n);
+}
+
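
The split-put arithmetic took me a second, so to spell out one case with invented numbers: with cache->size = 256, cache->len = 200 and n = 100, the preconditions hold (200 + 100 > 256 and 100 < 256), and the helper does:

  len = 256 - 200 = 56         -> the first 56 objects top the cache up to exactly 256
  enqueue 256 objects          -> one full-cache-sized flush to the backend
  cache->len = 100 - 56 = 44   -> the remaining 44 objects are copied to objs[0..43]

So the backend only ever sees enqueues of the full cache size, and the cache ends up holding the 44 newest objects.
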
 /**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
@@ -1376,52 +1416,44 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
  */
 static __rte_always_inline void
 rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
-				 unsigned int n, struct rte_mempool_cache *cache)
+				 unsigned int n, struct rte_mempool_cache * const cache)
 {
-	void **cache_objs;
-
-	/* No cache provided */
+	/* No cache provided? */
 	if (unlikely(cache == NULL))
 		goto driver_enqueue;

-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);

-	/* The request itself is too big for the cache */
-	if (unlikely(n > cache->flushthresh))
+	/* The request itself is known to be too big for any cache? */
+	if (__rte_constant(n) && n >= RTE_MEMPOOL_CACHE_MAX_SIZE)
 		goto driver_enqueue_stats_incremented;

-	/*
-	 * The cache follows the following algorithm:
-	 *   1. If the objects cannot be added to the cache without crossing
-	 *      the flush threshold, flush the cache to the backend.
-	 *   2. Add the objects to the cache.
-	 */
+	/* Enough remaining space in the cache? */
+	if (likely(cache->len + n <= cache->size)) {
+		void **cache_objs;
-	if (cache->len + n <= cache->flushthresh) {
+		/* Add the objects to the cache. */
 		cache_objs = &cache->objs[cache->len];
 		cache->len += n;
-	} else {
-		cache_objs = &cache->objs[0];
-		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
-		cache->len = n;
-	}
-
-	/* Add the objects to the cache. */
-	rte_memcpy(cache_objs, obj_table, sizeof(void *) * n);
+		rte_memcpy(cache_objs, obj_table, sizeof(void *) * n);
+	} else if (likely(n < cache->size))
+		rte_mempool_do_generic_put_split(mp, obj_table, n, cache);
+	else
+		goto driver_enqueue_stats_incremented;

 	return;

 driver_enqueue:

-	/* increment stat now, adding in mempool always success */
+	/* Increment stats now, adding in mempool always succeeds. */
 	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
 	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);

 driver_enqueue_stats_incremented:

-	/* push objects to the backend */
+	/* Push the objects directly to the backend. */
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }
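
For anyone skimming: the public entry point is unchanged, so callers still pair the lcore's default cache with rte_mempool_generic_put(), and the three-way branch above (fits in the cache / split put / straight to the backend) is chosen internally. The usual pattern stays something like this (ordinary API usage, not from the patch):

#include <rte_lcore.h>
#include <rte_mempool.h>

static void
free_burst(struct rte_mempool *mp, void **objs, unsigned int n)
{
	struct rte_mempool_cache *cache =
		rte_mempool_default_cache(mp, rte_lcore_id());

	/* Small bursts stay in the cache; bursts of cache size or more
	 * now go straight to the backend.
	 */
	rte_mempool_generic_put(mp, objs, n, cache);
}
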

@@ -1490,122 +1522,183 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
 }

 /**
- * @internal Get several objects from the mempool; used internally.
+ * @internal Get several objects from the mempool; used internally when
+ *   the number of objects exceeds what is available in the mempool cache.
  * @param mp
  *   A pointer to the mempool structure.
  * @param obj_table
  *   A pointer to a table of void * pointers (objects).
  * @param n
  *   The number of objects to get, must be strictly positive.
+ *   Must be more than available in the mempool cache, i.e.:
+ *   n > cache->len
  * @param cache
- *   A pointer to a mempool cache structure. May be NULL if not needed.
+ *   A pointer to a mempool cache structure. Not NULL.
  * @return
  *   - 0: Success.
  *   - <0: Error; code of driver dequeue function.
  */
-static __rte_always_inline int
-rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
-				 unsigned int n, struct rte_mempool_cache *cache)
+static int
+rte_mempool_do_generic_get_split(struct rte_mempool *mp, void **obj_table,
+		unsigned int n, struct rte_mempool_cache * const cache)
 {
 	int ret;
 	unsigned int remaining;
 	uint32_t index, len;
 	void **cache_objs;
+	const uint32_t cache_size = cache->size;
-	/* No cache provided */
-	if (unlikely(cache == NULL)) {
-		remaining = n;
-		goto driver_dequeue;
-	}
-
-	/* The cache is a stack, so copy will be in reverse order. */
+	/* Serve the first part of the request from the cache to return hot objects first. */
 	cache_objs = &cache->objs[cache->len];
+	len = cache->len;
+	remaining = n - len;
+	for (index = 0; index < len; index++)
+		*obj_table++ = *--cache_objs;

-	if (__rte_constant(n) && n <= cache->len) {
+	/* At this point, the cache is empty. */
+
+	/* More than can be served from a full cache? */
+	if (unlikely(remaining >= cache_size)) {
 		/*
-		 * The request size is known at build time, and
-		 * the entire request can be satisfied from the cache,
-		 * so let the compiler unroll the fixed length copy loop.
+		 * Serve the following part of the request directly from the backend
+		 * in multiples of the cache size.
 		 */
-		cache->len -= n;
-		for (index = 0; index < n; index++)
-			*obj_table++ = *--cache_objs;
+		len = remaining - remaining % cache_size;
+		ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, len);
+		if (unlikely(ret < 0)) {
+			/*
+			 * No further action is required to roll back the request,
+			 * as objects in the cache are intact, and no objects have
+			 * been dequeued from the backend.
+			 */

-		RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
-		RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
+			RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+			RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);

-		return 0;
-	}
+			return ret;
+		}

-	/*
-	 * Use the cache as much as we have to return hot objects first.
-	 * If the request size 'n' is known at build time, the above comparison
-	 * ensures that n > cache->len here, so omit RTE_MIN().
-	 */
-	len = __rte_constant(n) ? cache->len : RTE_MIN(n, cache->len);
-	cache->len -= len;
-	remaining = n - len;
-	for (index = 0; index < len; index++)
-		*obj_table++ = *--cache_objs;
+		remaining -= len;
+		obj_table += len;

-	/*
-	 * If the request size 'n' is known at build time, the case
-	 * where the entire request can be satisfied from the cache
-	 * has already been handled above, so omit handling it here.
-	 */
-	if (!__rte_constant(n) && remaining == 0) {
-		/* The entire request is satisfied from the cache. */
+		if (unlikely(remaining == 0)) {
+			cache->len = 0;

-		RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
-		RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);

-		return 0;
+			return 0;
+		}
 	}

-	/* if dequeue below would overflow mem allocated for cache */
-	if (unlikely(remaining > RTE_MEMPOOL_CACHE_MAX_SIZE))
-		goto driver_dequeue;
-
-	/* Fill the cache from the backend; fetch size + remaining objects. */
-	ret = rte_mempool_ops_dequeue_bulk(mp, cache->objs,
-			cache->size + remaining);
+	/* Fill the entire cache from the backend. */
+	ret = rte_mempool_ops_dequeue_bulk(mp, cache->objs, cache_size);
 	if (unlikely(ret < 0)) {
 		/*
-		 * We are buffer constrained, and not able to allocate
-		 * cache + remaining.
-		 * Do not fill the cache, just satisfy the remaining part of
-		 * the request directly from the backend.
+		 * Unable to fill the cache.
+		 * Last resort: Try only the remaining part of the request,
+		 * served directly from the backend.
 		 */
-		goto driver_dequeue;
+		ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, remaining);
+		if (unlikely(ret == 0)) {
+			cache->len = 0;
+
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
+
+			return 0;
+		}
+
+		/* Roll back. */
+		if (cache->len + remaining == n) {
+			/*
+			 * No further action is required to roll back the request,
+			 * as objects in the cache are intact, and no objects have
+			 * been dequeued from the backend.
+			 */
+		} else {
+			/* Update the state of the cache before putting back the objects. */
+			cache->len = 0;
+
+			len = n - remaining;
+			obj_table -= len;
+			rte_mempool_do_generic_put(mp, obj_table, len, cache);
+		}
+
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
+
+		return ret;
 	}

-	/* Satisfy the remaining part of the request from the filled cache. */
-	cache_objs = &cache->objs[cache->size + remaining];
+	/* Serve the remaining part of the request from the filled cache. */
+	cache_objs = &cache->objs[cache_size];
 	for (index = 0; index < remaining; index++)
 		*obj_table++ = *--cache_objs;

-	cache->len = cache->size;
+	cache->len = cache_size - remaining;

 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
 	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);

 	return 0;
+}
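
Same exercise for the get side, to check the "multiples of the cache size" behaviour (invented numbers): with cache->size = 256, cache->len = 50 and n = 600:

  50 objects come from the cache                       -> remaining = 550
  550 >= 256, so len = 550 - 550 % 256 = 512 objects
  are dequeued straight from the backend               -> remaining = 38
  the cache is then refilled with 256 objects
  and the last 38 are served from it                   -> cache->len = 256 - 38 = 218

Both backend dequeues (512 and 256) are whole multiples of the cache size, which is the access-size property this helper is aiming for.
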

-driver_dequeue:
+/**
+ * @internal Get several objects from the mempool; used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to get, must be strictly positive.
+ * @param cache
+ *   A pointer to a mempool cache structure. May be NULL if not needed.
+ * @return
+ *   - 0: Success.
+ *   - <0: Error; code of driver dequeue function.
+ */
+static __rte_always_inline int
+rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
+				 unsigned int n, struct rte_mempool_cache * const cache)
+{
+	int ret;

-	/* Get remaining objects directly from the backend. */
-	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, remaining);
+	/* No cache provided? */
+	if (unlikely(cache == NULL))
+		goto driver_dequeue;

-	if (ret < 0) {
-		if (likely(cache != NULL)) {
-			cache->len = n - remaining;
-			/*
-			 * No further action is required to roll the first part
-			 * of the request back into the cache, as objects in
-			 * the cache are intact.
-			 */
-		}
+	/* The request itself is known to be too big for any cache? */
+	if (__rte_constant(n) && n >= RTE_MEMPOOL_CACHE_MAX_SIZE)
+		goto driver_dequeue;
+
+	/* The request can be served entirely from the cache? */
+	if (likely(n <= cache->len)) {
+		unsigned int index;
+		void **cache_objs;
+
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
+
+		/*
+		 * The cache is a stack, so copy will be in reverse order.
+		 * If the request size is known at build time,
+		 * the compiler will unroll the fixed length copy loop.
+		 */
+		cache_objs = &cache->objs[cache->len];
+		cache->len -= n;
+		for (index = 0; index < n; index++)
+			*obj_table++ = *--cache_objs;
+
+		return 0;
+	} else
+		return rte_mempool_do_generic_get_split(mp, obj_table, n, cache);
+
+driver_dequeue:
+
+	/* Get the objects directly from the backend. */
+	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
+	if (unlikely(ret < 0)) {
 		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
 		RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
 	} else {
--
2.43.0

--00000000000033b3ff0622e39670--