Subject: Re: [PATCH v4] mempool cache: add zero-copy get and put functions
From: Andrew Rybchenko
Organization: OKTET Labs
Date: Tue, 27 Dec 2022 12:24:09 +0300
To: Morten Brørup, olivier.matz@6wind.com, honnappa.nagarahalli@arm.com,
 kamalakshitha.aligeri@arm.com, bruce.richardson@intel.com,
 konstantin.ananyev@huawei.com, dev@dpdk.org
Cc: nd@arm.com
In-Reply-To: <20221224115514.33476-1-mb@smartsharesystems.com>

On 12/24/22 14:55, Morten Brørup wrote:
> Zero-copy access to mempool caches is beneficial for PMD performance, and
> must be provided by the mempool library to fix [Bug 1052] without a
> performance regression.
>
> [Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052

Bugzilla ID: 1052

>
> v4:
> * Fix checkpatch warnings.
> v3:
> * Bugfix: Respect the cache size; compare to the flush threshold instead
>   of RTE_MEMPOOL_CACHE_MAX_SIZE.
> * Added 'rewind' function for incomplete 'put' operations. (Konstantin)
> * Replace RTE_ASSERTs with runtime checks of the request size.
>   Instead of failing, return NULL if the request is too big. (Konstantin)
> * Modified comparison to prevent overflow if n is really huge and len is
>   non-zero.
> * Updated the comments in the code.
> v2:
> * Fix checkpatch warnings.
> * Fix missing registration of trace points.
> * The functions are inline, so they don't go into the map file.
> v1 changes from the RFC:
> * Removed run-time parameter checks. (Honnappa)
>   This is a hot fast path function; it requires correct application
>   behaviour, i.e. the function parameters must be valid.
> * Added RTE_ASSERT for the parameters instead.
>   Code for this is only generated if built with RTE_ENABLE_ASSERT.
> * Removed fallback when 'cache' parameter is not set. (Honnappa)
> * Chose the simple get function; i.e. do not move the existing objects in
>   the cache to the top of the new stack, just leave them at the bottom.
> * Renamed the functions. Other suggestions are welcome, of course. ;-)
> * Updated the function descriptions.
> * Added the functions to trace_fp and version.map.
>
> Signed-off-by: Morten Brørup

[snip]

> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 9f530db24b..00387e7543 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h

[snip]

> @@ -1346,6 +1347,170 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
>  	cache->len = 0;
>  }
>  
> +/**
> + * @warning
> + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> + *
> + * Zero-copy put objects in a user-owned mempool cache backed by the specified mempool.
> + *
> + * @param cache
> + *   A pointer to the mempool cache.
> + * @param mp
> + *   A pointer to the mempool.
> + * @param n
> + *   The number of objects to be put in the mempool cache.
> + * @return
> + *   The pointer to where to put the objects in the mempool cache.
> + *   NULL if the request itself is too big for the cache, i.e.
> + *   exceeds the cache flush threshold.
> + */
> +__rte_experimental
> +static __rte_always_inline void *
> +rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
> +		struct rte_mempool *mp,
> +		unsigned int n)
> +{
> +	void **cache_objs;
> +
> +	RTE_ASSERT(cache != NULL);
> +	RTE_ASSERT(mp != NULL);
> +
> +	rte_mempool_trace_cache_zc_put_bulk(cache, mp, n);

Wouldn't it be better to do the tracing near the return statement, so
that the return value can be traced as well?

> +
> +	if (n <= cache->flushthresh - cache->len) {
> +		/*
> +		 * The objects can be added to the cache without crossing the
> +		 * flush threshold.
> +		 */
> +		cache_objs = &cache->objs[cache->len];
> +		cache->len += n;
> +	} else if (likely(n <= cache->flushthresh)) {
> +		/*
> +		 * The request itself fits into the cache.
> +		 * But first, the cache must be flushed to the backend, so
> +		 * adding the objects does not cross the flush threshold.
> +		 */
> +		cache_objs = &cache->objs[0];
> +		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
> +		cache->len = n;
> +	} else {
> +		/* The request itself is too big for the cache. */
> +		return NULL;
> +	}
> +
> +	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> +	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);

This duplicates the put function code. Shouldn't the put function use
this one to avoid the duplication?

> +
> +	return cache_objs;
> +}
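For readers following the thread, here is a minimal sketch of how a driver
could use the put side (editorial illustration, not part of the patch; the
helper name example_free_burst and the error codes are assumptions, only
the rte_mempool_cache_zc_put_bulk() call itself comes from the patch):

#include <errno.h>
#include <rte_branch_prediction.h>
#include <rte_lcore.h>
#include <rte_memcpy.h>
#include <rte_mempool.h>

/* Return a burst of object pointers to the per-lcore cache without the
 * intermediate copy that rte_mempool_put_bulk() would perform. */
static inline int
example_free_burst(struct rte_mempool *mp, void * const *objs, unsigned int n)
{
        struct rte_mempool_cache *cache =
                rte_mempool_default_cache(mp, rte_lcore_id());
        void **cache_objs;

        if (cache == NULL)
                return -ENOTSUP;        /* no per-lcore cache configured */

        cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);
        if (unlikely(cache_objs == NULL))
                return -ENOSPC;         /* n exceeds the cache flush threshold */

        /* The caller copies its pointers directly into the cache's object table. */
        rte_memcpy(cache_objs, objs, n * sizeof(void *));
        return 0;
}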
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> + *
> + * Zero-copy un-put objects in a user-owned mempool cache.
> + *
> + * @param cache
> + *   A pointer to the mempool cache.
> + * @param n
> + *   The number of objects not put in the mempool cache after calling
> + *   rte_mempool_cache_zc_put_bulk().
> + */
> +__rte_experimental
> +static __rte_always_inline void
> +rte_mempool_cache_zc_put_rewind(struct rte_mempool_cache *cache,
> +		unsigned int n)
> +{
> +	RTE_ASSERT(cache != NULL);
> +	RTE_ASSERT(n <= cache->len);
> +
> +	rte_mempool_trace_cache_zc_put_rewind(cache, n);
> +
> +	cache->len -= n;
> +
> +	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, (int)-n);
> +}
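The rewind function covers the case where the caller reserves room for 'n'
objects but ends up storing fewer. A hedged sketch of that pattern, loosely
modelled on a PMD TX completion path (the helper name, the single-segment
assumption and the fallback to rte_pktmbuf_free_bulk() are illustrative
assumptions, not taken from the patch):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Free a burst of single-segment mbufs, storing those that belong to 'mp'
 * directly in the per-lcore cache and rewinding the unused slots. */
static inline void
example_tx_free_burst(struct rte_mempool *mp, struct rte_mbuf **mbufs,
                unsigned int n)
{
        struct rte_mempool_cache *cache =
                rte_mempool_default_cache(mp, rte_lcore_id());
        void **cache_objs;
        unsigned int i, stored = 0;

        if (cache == NULL ||
                        (cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n)) == NULL) {
                /* No cache, or the burst exceeds the flush threshold. */
                rte_pktmbuf_free_bulk(mbufs, n);
                return;
        }

        for (i = 0; i < n; i++) {
                struct rte_mbuf *m = rte_pktmbuf_prefree_seg(mbufs[i]);

                if (m == NULL)
                        continue;                       /* still referenced elsewhere */
                if (m->pool == mp)
                        cache_objs[stored++] = m;       /* goes straight into the cache */
                else
                        rte_mempool_put(m->pool, m);    /* mbuf from another pool */
        }

        /* Hand back the reserved slots that were not used. */
        rte_mempool_cache_zc_put_rewind(cache, n - stored);
}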
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> + *
> + * Zero-copy get objects from a user-owned mempool cache backed by the specified mempool.
> + *
> + * @param cache
> + *   A pointer to the mempool cache.
> + * @param mp
> + *   A pointer to the mempool.
> + * @param n
> + *   The number of objects to prefetch into the mempool cache.
> + * @return
> + *   The pointer to the objects in the mempool cache.
> + *   NULL on error; i.e. the cache + the pool does not contain 'n' objects.
> + *   With rte_errno set to the error code of the mempool dequeue function,
> + *   or EINVAL if the request itself is too big for the cache, i.e.
> + *   exceeds the cache flush threshold.
> + */
> +__rte_experimental
> +static __rte_always_inline void *
> +rte_mempool_cache_zc_get_bulk(struct rte_mempool_cache *cache,
> +		struct rte_mempool *mp,
> +		unsigned int n)
> +{
> +	unsigned int len;
> +
> +	RTE_ASSERT(cache != NULL);
> +	RTE_ASSERT(mp != NULL);
> +
> +	rte_mempool_trace_cache_zc_get_bulk(cache, mp, n);
> +
> +	len = cache->len;
> +
> +	if (n <= len) {
> +		/* The request can be satisfied from the cache as is. */
> +		len -= n;
> +	} else if (likely(n <= cache->flushthresh)) {
> +		/*
> +		 * The request itself can be satisfied from the cache.
> +		 * But first, the cache must be filled from the backend;
> +		 * fetch size + requested - len objects.
> +		 */
> +		int ret;
> +		const unsigned int size = cache->size;
> +
> +		ret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], size + n - len);

If size is RTE_MEMPOOL_CACHE_MAX_SIZE and n is cache->flushthresh (i.e.
RTE_MEMPOOL_CACHE_MAX_SIZE * 1.5), we'll dequeue up to
RTE_MEMPOOL_CACHE_MAX_SIZE * 2.5 objects, whereas the cache objects
array size is just RTE_MEMPOOL_CACHE_MAX_SIZE * 2. Am I missing
something?

> +		if (unlikely(ret < 0)) {
> +			/*
> +			 * We are buffer constrained.
> +			 * Do not fill the cache, just satisfy the request.
> +			 */
> +			ret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], n - len);
> +			if (unlikely(ret < 0)) {
> +				/* Unable to satisfy the request. */
> +
> +				RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
> +				RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
> +
> +				rte_errno = -ret;
> +				return NULL;
> +			}
> +
> +			len = 0;
> +		} else
> +			len = size;

Curly brackets are required in the else branch, since the if branch
above has them.

> +	} else {
> +		/* The request itself is too big for the cache. */
> +		rte_errno = EINVAL;
> +		return NULL;
> +	}
> +
> +	cache->len = len;
> +
> +	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
> +	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
> +
> +	return &cache->objs[len];
> +}
> +
>  /**
>   * @internal Put several objects back in the mempool; used internally.
>   * @param mp
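Finally, to close the loop on the get side, a minimal sketch of how an RX
refill path might consume objects directly from the cache's object table
(editorial illustration, not from the patch; the helper name and ring shape
are assumptions, and a real driver would also re-initialize the mbuf
headers before handing the buffers to hardware):

#include <errno.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Refill 'n' RX ring slots straight from the cache's object table,
 * avoiding the copy into a temporary pointer array. */
static inline int
example_rx_refill(struct rte_mempool *mp, struct rte_mbuf **rx_ring,
                unsigned int n)
{
        struct rte_mempool_cache *cache =
                rte_mempool_default_cache(mp, rte_lcore_id());
        void **objs;
        unsigned int i;

        if (cache == NULL)
                return -ENOTSUP;        /* no per-lcore cache configured */

        objs = rte_mempool_cache_zc_get_bulk(cache, mp, n);
        if (objs == NULL)
                return -rte_errno;      /* backend empty, or n above the flush threshold */

        for (i = 0; i < n; i++)
                rx_ring[i] = (struct rte_mbuf *)objs[i];        /* raw objects; reset before use */

        return 0;
}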