Subject: [RFC] mempool: zero-copy cache put bulk
From: Morten Brørup
To: dev@dpdk.org
Date: Sat, 5 Nov 2022 14:40:10 +0100
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D87489@smartserver.smartshare.dk>

Zero-copy access to the mempool cache is beneficial for PMD performance, and must be provided by the mempool library to fix [Bug 1052] without a performance regression.

[Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052

This RFC offers a conceptual zero-copy put function, where the application promises to store some objects, and in return gets the address at which to store them. I would like some early feedback.

Notes:
* Allowing the 'cache' parameter to be NULL, and getting it from the mempool instead, was inspired by rte_mempool_cache_flush().
* Asserting that the 'mp' parameter is not NULL is not done by other functions, so I have omitted it here too.

NB: Please ignore the formatting. Also, this code has not even been compile tested.

/**
 * Promise to put objects in a mempool via zero-copy access to a user-owned
 * mempool cache.
 *
 * @param cache
 *   A pointer to the mempool cache.
 * @param mp
 *   A pointer to the mempool.
 * @param n
 *   The number of objects to be put in the mempool cache.
 * @return
 *   The pointer to where to put the objects in the mempool cache.
 *   NULL on error, with rte_errno set appropriately.
 */
static __rte_always_inline void *
rte_mempool_cache_put_bulk_promise(struct rte_mempool_cache *cache,
		struct rte_mempool *mp, unsigned int n)
{
	void **cache_objs;

	/* If no cache was passed, use the default cache of the executing lcore. */
	if (cache == NULL)
		cache = rte_mempool_default_cache(mp, rte_lcore_id());
	if (cache == NULL) {
		rte_errno = EINVAL;
		return NULL;
	}

	rte_mempool_trace_cache_put_bulk_promise(cache, mp, n);

	/* The request itself is too big for the cache. */
	if (unlikely(n > cache->flushthresh)) {
		rte_errno = EINVAL;
		return NULL;
	}

	/*
	 * The cache follows the following algorithm:
	 *   1. If the objects cannot be added to the cache without crossing
	 *      the flush threshold, flush the cache to the backend.
	 *   2. Add the objects to the cache.
	 */
	if (cache->len + n <= cache->flushthresh) {
		/* Reserve room at the current tail of the cache. */
		cache_objs = &cache->objs[cache->len];
		cache->len += n;
	} else {
		/* Flush the cache to the backend, then reserve room at its start. */
		cache_objs = &cache->objs[0];
		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
		cache->len = n;
	}

	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);

	return cache_objs;
}

Med venlig hilsen / Kind regards,

-Morten Brørup
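
For illustration, a rough sketch of how a PMD free path might use the proposed function follows. The helper name tx_free_bulk, the mbuf array, and the fallback to the conventional rte_mempool_put_bulk() are hypothetical, and, like the function above, this sketch has not been compile tested:

static void
tx_free_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs, unsigned int n)
{
	struct rte_mempool_cache *cache;
	void **cache_objs;
	unsigned int i;

	cache = rte_mempool_default_cache(mp, rte_lcore_id());

	/* Promise to store n objects; get the address at which to store them. */
	cache_objs = rte_mempool_cache_put_bulk_promise(cache, mp, n);
	if (likely(cache_objs != NULL)) {
		/* Zero-copy: write the object pointers directly into the cache. */
		for (i = 0; i < n; i++)
			cache_objs[i] = mbufs[i];
	} else {
		/* Fall back to the conventional (copying) bulk put. */
		rte_mempool_put_bulk(mp, (void **)mbufs, n);
	}
}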