Subject: RE: Bug in rte_mempool_do_generic_get?
Date: Fri, 24 Feb 2023 13:13:19 +0100
From: Morten Brørup
To: Harris, James R; dev@dpdk.org

> From: Harris, James R [mailto:james.r.harris@intel.com]
> Sent: Friday, 24 February 2023 04.03
>
> Hi,
>
> I've tracked down a regression in SPDK to DPDK commit a2833ecc5
> ("mempool: fix get objects from mempool with cache").

The problem probably goes all the way back to the introduction of the cache flush threshold, which effectively increased the cache size to 1.5 times the configured cache size, in this commit:

http://git.dpdk.org/dpdk/commit/lib/librte_mempool/rte_mempool.h?id=ea5dd2744b90b330f07fd10f327ab99ef55c7266

It might even go further back.

>
> Here's an example that demonstrates the problem:
>
> Allocate mempool with 2048 buffers and cache size 256.
> Core 0 allocates 512 buffers. Mempool pulls 512 + 256 buffers from the
> backing ring, returns 512 of them to the caller, puts the other 256 in
> core 0's cache. Backing ring now has 1280 buffers.
> Core 1 allocates 512 buffers. Mempool pulls 512 + 256 buffers from the
> backing ring, returns 512 of them to the caller, puts the other 256 in
> core 1's cache. Backing ring now has 512 buffers.
> Core 2 allocates 512 buffers. Mempool pulls the remaining 512 buffers
> from the backing ring and returns all of them to the caller. Backing
> ring now has 0 buffers.
> Core 3 tries to allocate 512 buffers and it fails.
>
> In the SPDK case, we don't really need or use the mempool cache here,
> so changing the cache size to 0 fixes the problem and is what we're
> going to move forward with.

If you are not making get/put requests smaller than the cache size, then yes, having no cache is the best solution.

>
> But the behavior did cause a regression, so I thought I'd mention it
> here.

Thank you.

> If you have a mempool with 2048 objects, shouldn't 4 cores each be
> able to do a 512 buffer bulk get, regardless of the configured cache
> size?

No, the scenario you described above is the expected behavior. I think it is documented somewhere that objects in the caches are unavailable for other cores, but now I cannot find where this is documented.
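For illustration, here is a rough, untested sketch using the public mempool API that should reproduce the behavior you describe when run with at least four worker lcores (e.g. -l 0-4). The pool name, object size and use of all worker lcores are my own arbitrary choices, not anything from your setup:

#include <stdio.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

#define POOL_SIZE  2048
#define CACHE_SIZE 256
#define BURST      512

static struct rte_mempool *mp;

/* Each worker lcore attempts one 512-object bulk get and keeps the objects. */
static int
worker(void *arg __rte_unused)
{
        void *objs[BURST];

        if (rte_mempool_get_bulk(mp, objs, BURST) < 0)
                printf("lcore %u: bulk get of %u objects failed\n",
                       rte_lcore_id(), BURST);
        else
                printf("lcore %u: bulk get of %u objects succeeded\n",
                       rte_lcore_id(), BURST);
        return 0;
}

int
main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)
                return -1;

        /* 2048 objects, per-lcore cache of 256 (flush threshold 384). */
        mp = rte_mempool_create("test_pool", POOL_SIZE, 64, CACHE_SIZE, 0,
                                NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
        if (mp == NULL)
                rte_panic("cannot create mempool: %s\n",
                          rte_strerror(rte_errno));

        /* Run the bulk get on every worker lcore. */
        rte_eal_mp_remote_launch(worker, NULL, SKIP_MAIN);
        rte_eal_mp_wait_lcore();

        return 0;
}

With four workers, the last bulk get is expected to fail, because the objects left behind in the other workers' caches are not visible to it.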
Furthermore, since the effective per-core cache size is 1.5 * the configured cache size, a configured cache size of 256 may leave up to 384 objects in each per-core cache.

With 4 cores, you can expect up to 3 * 384 = 1152 objects sitting in the caches of other cores. If you want to be able to pull 512 objects with each core, the pool size should be 4 * 512 + 1152 objects.
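To put that sizing rule into code form (just a back-of-the-envelope helper I am making up here, not anything in the mempool API):

/*
 * Rough minimum pool size so that each of n_cores lcores can bulk get
 * "burst" objects, even when the other lcores' caches are filled up to
 * the flush threshold (1.5 * cache_size). Not part of the DPDK API.
 */
static inline unsigned int
min_pool_size(unsigned int n_cores, unsigned int burst,
              unsigned int cache_size)
{
        unsigned int flush_thresh = cache_size + cache_size / 2;

        return n_cores * burst + (n_cores - 1) * flush_thresh;
}

/* The example above: min_pool_size(4, 512, 256) = 2048 + 3 * 384 = 3200. */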