From: Morten Brørup
To: Olivier Matz
Cc: Andrew Rybchenko, dev@dpdk.org, Bruce Richardson
Subject: RE: [PATCH v6 3/4] mempool: fix cache flushing algorithm
Date: Fri, 14 Oct 2022 17:57:39 +0200
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D873E3@smartserver.smartshare.dk>

> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, 14 October 2022 16.01
>
> Hi Morten, Andrew,
>
> On Sun, Oct 09, 2022 at 05:08:39PM +0200, Morten Brørup wrote:
> > > From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
> > > Sent: Sunday, 9 October 2022 16.52
> > >
> > > On 10/9/22 17:31, Morten Brørup wrote:
> > > >> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
> > > >> Sent: Sunday, 9 October 2022 15.38
> > > >>
> > > >> From: Morten Brørup
> > > >>
> > >
> [...]
>
> I finally took a couple of hours to carefully review the mempool-related
> series (including the ones that have already been pushed).
>
> The new behavior looks better to me in all situations I can think
> about.

Extreme care is required when touching a core library like the mempool.

Thank you, Olivier.

>
> >
> > > >> --- a/lib/mempool/rte_mempool.h
> > > >> +++ b/lib/mempool/rte_mempool.h
> > > >> @@ -90,7 +90,7 @@ struct rte_mempool_cache {
> > > >>  	 * Cache is allocated to this size to allow it to overflow in certain
> > > >>  	 * cases to avoid needless emptying of cache.
> > > >>  	 */
> > > >> -	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
> > > >> +	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2]; /**< Cache objects */
> > > >>  } __rte_cache_aligned;
> > > >
> > > > How much are we allowed to break the ABI here?
> > > >
> > > > This patch reduces the size of the structure by removing a now unused
> > > > part at the end, which should be harmless.
>
> It is an ABI breakage: an existing application will use the new 22.11
> function to create the mempool (with a smaller cache), but the old
> inlined get/put code, which can exceed MAX_SIZE x 2, will remain.
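
To see why the old inlined path can exceed MAX_SIZE x 2, here is a rough
sketch of the pre-22.11 put logic in rte_mempool_do_generic_put(). It is an
approximation for illustration only, not the exact DPDK source: objects are
appended to the cache before the flush threshold (which is larger than the
cache size) is checked, which is why the array had to be dimensioned
MAX_SIZE * 3.

#include <rte_mempool.h>

/* Rough sketch of the old (pre-22.11) inline put behavior.
 * Approximation for illustration only - not the exact DPDK code. */
static inline void
old_cache_put_sketch(struct rte_mempool *mp, void * const *obj_table,
		     unsigned int n, struct rte_mempool_cache *cache)
{
	/* Objects were appended to the cache first... */
	rte_memcpy(&cache->objs[cache->len], obj_table, sizeof(void *) * n);
	cache->len += n;

	/* ...and only then checked against the flush threshold
	 * (flushthresh > size). With n up to RTE_MEMPOOL_CACHE_MAX_SIZE,
	 * cache->len could transiently exceed MAX_SIZE * 2 before the
	 * flush below brought it back down to cache->size. */
	if (cache->len >= cache->flushthresh) {
		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
					     cache->len - cache->size);
		cache->len = cache->size;
	}
}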
>
> But this is a nice memory consumption improvement, in my opinion we
> should accept it for 22.11 with an entry in the release note.
>
>
> > > >
> > > > If we may also move the position of the objs array, I would add
> > > > __rte_cache_aligned to the objs array. It makes no difference in the
> > > > general case, but if get/put operations are always 32 objects, it will
> > > > reduce the number of memory (or last level cache) accesses from five to
> > > > four 64 B cache lines for every get/put operation.
>
> Will it really be the case? Since cache->len has to be accessed too,
> I don't think it would make a difference.

Yes, the first cache line, containing cache->len, will always be accessed. I
forgot to count that; so the improvement from aligning cache->objs will be
five cache line accesses instead of six.

Let me try to explain the scenario in other words:

In an application where a mempool cache is only accessed in bursts of 32
objects (256 B), it matters whether those 256 B accesses in the mempool cache
start at a cache line aligned address or not. If cache line aligned, accessing
those 256 B in the mempool cache will only touch 4 cache lines; if not, 5
cache lines will be touched. (For architectures with 128 B cache lines, it
will be 2 instead of 3 touched cache lines per mempool cache get/put operation
in applications using only bursts of 32 objects.)

If we cache line align cache->objs, those bursts of 32 objects (256 B) will be
cache line aligned: any address at cache->objs[N * 32 objects] is cache line
aligned if cache->objs[0] is cache line aligned.

Currently, cache->objs directly follows cache->len, which makes cache->objs[0]
cache line unaligned.

If we decide to break the mempool cache ABI, we might as well include my
suggested cache line alignment performance improvement. It doesn't degrade
performance for mempool caches that are not exclusively accessed in bursts of
32 objects.

>
>
> > > >
> > > >  	uint32_t len;	/**< Current cache count */
> > > > -	/*
> > > > -	 * Cache is allocated to this size to allow it to overflow in certain
> > > > -	 * cases to avoid needless emptying of cache.
> > > > -	 */
> > > > -	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
> > > > +	/**
> > > > +	 * Cache objects
> > > > +	 *
> > > > +	 * Cache is allocated to this size to allow it to overflow in certain
> > > > +	 * cases to avoid needless emptying of cache.
> > > > +	 */
> > > > +	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2] __rte_cache_aligned;
> > > > } __rte_cache_aligned;
> > >
> > > I think aligning objs on cacheline should be a separate patch.
> >
> > Good point. I'll let you do it. :-)
> >
> > PS: Thank you for following up on this patch series, Andrew!
>
> Many thanks for this rework.
>
> Acked-by: Olivier Matz

Perhaps Reviewed-by would be appropriate?
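
As a concrete illustration of the alignment argument above (a sketch only,
not part of the patch; it assumes 64 B cache lines and 8 B pointers, and
omits the structure's debug statistics fields):

#include <stdint.h>
#include <rte_mempool.h>

/* Illustration only - mirrors the layout discussed above, not a patch. */
struct mempool_cache_layout_sketch {
	uint32_t size;        /**< Size of the cache */
	uint32_t flushthresh; /**< Threshold before we flush excess elements */
	uint32_t len;         /**< Current cache count */
	/*
	 * Without the alignment attribute below, objs[0] starts right after
	 * len, so a 32-object burst at objs[N * 32] (32 * 8 B = 256 B)
	 * generally straddles 5 cache lines, plus the line holding len:
	 * 6 lines in total. With objs[] cache line aligned, the same burst
	 * covers exactly 4 lines, plus the line holding len: 5 in total.
	 */
	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2] __rte_cache_aligned;
} __rte_cache_aligned;

The trade-off is only a few bytes of padding between len and objs[0].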