Subject: Re: [PATCH v2] mempool: fix get objects from mempool with cache
Date: Tue, 4 Oct 2022 15:57:47 +0300
From: Andrew Rybchenko
Organization: OKTET Labs
To: Morten Brørup, olivier.matz@6wind.com
Cc: bruce.richardson@intel.com, jerinjacobk@gmail.com, dev@dpdk.org
Message-ID: <43c333f7-83fa-3072-8db1-8b3a6ece5999@oktetlabs.ru>
In-Reply-To: <20220202081426.77975-1-mb@smartsharesystems.com>

Hi Morten,

In general I agree that the fix is required. In the v3 I have sent, I'm
trying to make it a bit better from my point of view. See a few notes
below.

On 2/2/22 11:14, Morten Brørup wrote:
> A flush threshold for the mempool cache was introduced in DPDK version
> 1.3, but rte_mempool_do_generic_get() was not completely updated back
> then, and some inefficiencies were introduced.
>
> This patch fixes the following in rte_mempool_do_generic_get():
>
> 1. The code that initially screens the cache request was not updated
> with the change in DPDK version 1.3.
> The initial screening compared the request length to the cache size,
> which was correct before, but became irrelevant with the introduction of
> the flush threshold. E.g. the cache can hold up to flushthresh objects,
> which is more than its size, so some requests were not served from the
> cache, even though they could have been.
> The initial screening has now been corrected to match the initial
> screening in rte_mempool_do_generic_put(), which verifies that a cache
> is present, and that the length of the request does not overflow the
> memory allocated for the cache.
>
> This bug caused a major performance degradation in scenarios where the
> application burst length is the same as the cache size. In such cases,
> the objects were never fetched from the mempool cache, even though they
> could have been.
> This scenario occurs e.g. if an application has configured a mempool
> with a size matching the application's burst size.
>
> 2. The function is a helper for rte_mempool_generic_get(), so it must
> behave according to the description of that function.
> Specifically, objects must first be returned from the cache,
> subsequently from the ring.
> After the change in DPDK version 1.3, this was not the behavior when
> the request was partially satisfied from the cache; instead, the objects
> from the ring were returned ahead of the objects from the cache.
> This bug degraded application performance on CPUs with a small L1 cache,
> which benefit from having the hot objects first in the returned array.
> (This is probably also the reason why the function returns the objects
> in reverse order, which it still does.)
> Now, all code paths first return objects from the cache, subsequently
> from the ring.
>
> The function was not behaving as described (by the function using it)
> and as expected by applications using it. This in itself is also a bug.
>
> 3. If the cache could not be backfilled, the function would attempt
> to get all the requested objects from the ring (instead of only the
> number of requested objects minus the objects available in the cache),
> and the function would fail if that failed.
> Now, the first part of the request is always satisfied from the cache,
> and if the subsequent backfilling of the cache from the ring fails, only
> the remaining requested objects are retrieved from the ring.
>
> The function would fail despite there being enough objects in the cache
> plus the common pool.
>
> 4. The code flow for satisfying the request from the cache was slightly
> inefficient:
> The likely code path, where the objects are simply served from the cache,
> was treated as unlikely. Now it is treated as likely.
> And in the code path where the cache was backfilled first, numbers were
> added to and subtracted from the cache length; now this code path simply
> sets the cache length to its final value.

I've just sent v3 with suggested changes to the patch.
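As a stand-alone illustration of point 1 above (a minimal sketch, not the
DPDK code; "cache_model", CACHE_MAX_SIZE and the values used are simplified
stand-ins): with a cache size of 256, the flush threshold lets the cache
legitimately hold more than 256 objects, yet the old screening sends a
256-object burst straight to the ring, while the corrected screening lets
it be served from the cache.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_MAX_SIZE 512 /* stands in for RTE_MEMPOOL_CACHE_MAX_SIZE */

struct cache_model {
	uint32_t size;        /* configured cache size */
	uint32_t flushthresh; /* 1.5 * size; len may grow up to this value */
	uint32_t len;         /* objects currently held in the cache */
};

/* Old screening: any request of cache->size or more bypasses the cache. */
bool old_can_use_cache(const struct cache_model *c, unsigned int n)
{
	return c != NULL && n < c->size;
}

/* Corrected screening: only requests that cannot fit the cache memory bypass it. */
bool new_can_use_cache(const struct cache_model *c, unsigned int n)
{
	return c != NULL && n <= CACHE_MAX_SIZE;
}

int main(void)
{
	struct cache_model c = { .size = 256, .flushthresh = 384, .len = 300 };
	unsigned int n = 256; /* burst length equal to the cache size */

	printf("old screening lets the cache be used: %d\n", old_can_use_cache(&c, n)); /* 0 */
	printf("new screening lets the cache be used: %d\n", new_can_use_cache(&c, n)); /* 1 */
	return 0;
}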
> v2 changes
> - Do not modify description of return value. This belongs in a separate
>   doc fix.
> - Elaborate even more on which bugs the modifications fix.
>
> Signed-off-by: Morten Brørup
> ---
>  lib/mempool/rte_mempool.h | 75 ++++++++++++++++++++++++++++-----------
>  1 file changed, 54 insertions(+), 21 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 1e7a3c1527..2898c690b0 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1463,38 +1463,71 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
>  	uint32_t index, len;
>  	void **cache_objs;
>
> -	/* No cache provided or cannot be satisfied from cache */
> -	if (unlikely(cache == NULL || n >= cache->size))
> +	/* No cache provided or if get would overflow mem allocated for cache */
> +	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))

The second condition is unnecessary until we try to fill the cache from
the backend.
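To make that concrete, a rough model of such a reordering (not the v3 code;
"cache_model", CACHE_MAX_SIZE, backend_dequeue() and get_bulk_model() are
simplified stand-ins, and the cache refill itself is elided): a request that
fits in the objects currently held by the cache needs no bound check at all;
the bound against the memory allocated for the cache only matters once the
cache is about to be refilled from the backend.

#include <stddef.h>
#include <stdint.h>

#define CACHE_MAX_SIZE 512	/* stands in for RTE_MEMPOOL_CACHE_MAX_SIZE */

struct cache_model {
	uint32_t size;			/* configured cache size */
	uint32_t len;			/* may exceed size, up to the flush threshold */
	void *objs[CACHE_MAX_SIZE * 3];
};

/* Stand-in for rte_mempool_ops_dequeue_bulk(); success is enough here. */
int backend_dequeue(void **obj_table, unsigned int n)
{
	while (n-- > 0)
		*obj_table++ = NULL; /* placeholder objects */
	return 0;
}

int get_bulk_model(struct cache_model *cache, void **obj_table, unsigned int n)
{
	void **cache_objs;
	unsigned int index;

	/* Only the NULL check is needed before looking at the cache. */
	if (cache == NULL)
		goto ring_dequeue;

	if (n <= cache->len) {
		/* Served entirely from the cache: no size bound involved. */
		cache_objs = &cache->objs[cache->len];
		cache->len -= n;
		for (index = 0; index < n; index++)
			*obj_table++ = *--cache_objs;
		return 0;
	}

	/*
	 * The bound matters only here, where a refill would write
	 * cache->size + remaining objects into the cache memory.
	 */
	if (n > CACHE_MAX_SIZE)
		goto ring_dequeue;

	/* ... deplete the cache and refill it from the backend (elided) ... */

ring_dequeue:
	/* Nothing has been taken from the cache on any path reaching this. */
	return backend_dequeue(obj_table, n);
}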
>  		goto ring_dequeue;
>
> -	cache_objs = cache->objs;
> +	cache_objs = &cache->objs[cache->len];
> +
> +	if (n <= cache->len) {
> +		/* The entire request can be satisfied from the cache. */
> +		cache->len -= n;
> +		for (index = 0; index < n; index++)
> +			*obj_table++ = *--cache_objs;
> +
> +		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> +		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
>
> -	/* Can this be satisfied from the cache? */
> -	if (cache->len < n) {
> -		/* No. Backfill the cache first, and then fill from it */
> -		uint32_t req = n + (cache->size - cache->len);
> +		return 0;
> +	}
>
> -		/* How many do we require i.e. number to fill the cache + the request */
> -		ret = rte_mempool_ops_dequeue_bulk(mp,
> -			&cache->objs[cache->len], req);
> +	/* Satisfy the first part of the request by depleting the cache. */
> +	len = cache->len;
> +	for (index = 0; index < len; index++)
> +		*obj_table++ = *--cache_objs;

I dislike the duplication of these lines here and above. See v3.

> +
> +	/* Number of objects remaining to satisfy the request. */
> +	len = n - len;
> +
> +	/* Fill the cache from the ring; fetch size + remaining objects. */
> +	ret = rte_mempool_ops_dequeue_bulk(mp, cache->objs,
> +			cache->size + len);
> +	if (unlikely(ret < 0)) {
> +		/*
> +		 * We are buffer constrained, and not able to allocate
> +		 * cache + remaining.
> +		 * Do not fill the cache, just satisfy the remaining part of
> +		 * the request directly from the ring.
> +		 */
> +		ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, len);

I dislike the duplication as well. We can goto ring_dequeue instead. See
v3. (A rough sketch of this idea follows the quoted diff below.)

>  		if (unlikely(ret < 0)) {
>  			/*
> -			 * In the off chance that we are buffer constrained,
> -			 * where we are not able to allocate cache + n, go to
> -			 * the ring directly. If that fails, we are truly out of
> -			 * buffers.
> +			 * That also failed.
> +			 * No further action is required to roll the first
> +			 * part of the request back into the cache, as both
> +			 * cache->len and the objects in the cache are intact.
>  			 */
> -			goto ring_dequeue;
> +			RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
> +			RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
> +
> +			return ret;
>  		}
>
> -		cache->len += req;
> +		/* Commit that the cache was emptied. */
> +		cache->len = 0;
> +
> +		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> +		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> +
> +		return 0;
>  	}
>
> -	/* Now fill in the response ... */
> -	for (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)
> -		*obj_table = cache_objs[len];
> +	cache_objs = &cache->objs[cache->size + len];
>
> -	cache->len -= n;
> +	/* Satisfy the remaining part of the request from the filled cache. */
> +	cache->len = cache->size;
> +	for (index = 0; index < len; index++)
> +		*obj_table++ = *--cache_objs;
>
>  	RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
>  	RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> @@ -1503,7 +1536,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
>
>  ring_dequeue:
>
> -	/* get remaining objects from ring */
> +	/* Get the objects from the ring. */
>  	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
>
>  	if (ret < 0) {
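Coming back to the "goto ring_dequeue" note above, here is a rough
stand-alone sketch of that idea (not the actual v3 code; the names are
simplified stand-ins and the backend stub never fails). The fallback paths
reuse a single dequeue at the ring_dequeue label, with obj_table already
advanced and only the remaining count left to fetch, instead of repeating
the dequeue call inline:

#include <stddef.h>
#include <stdint.h>

#define CACHE_MAX_SIZE 512	/* stands in for RTE_MEMPOOL_CACHE_MAX_SIZE */

struct cache_model {
	uint32_t size;			/* configured cache size */
	uint32_t len;			/* objects currently held in the cache */
	void *objs[CACHE_MAX_SIZE * 3];
};

/* Stand-in for rte_mempool_ops_dequeue_bulk(); always succeeds here. */
int backend_dequeue(void **obj_table, unsigned int n)
{
	while (n-- > 0)
		*obj_table++ = NULL; /* placeholder objects */
	return 0;
}

int get_bulk_model(struct cache_model *cache, void **obj_table, unsigned int n)
{
	unsigned int remaining = n;
	unsigned int index, len;
	void **cache_objs;

	if (cache == NULL)
		goto ring_dequeue;

	/* Serve as much as possible from the cache first (hot objects). */
	len = remaining < cache->len ? remaining : cache->len;
	cache_objs = &cache->objs[cache->len];
	cache->len -= len;
	remaining -= len;
	for (index = 0; index < len; index++)
		*obj_table++ = *--cache_objs;

	if (remaining == 0)
		return 0;

	/* The bound check, needed only before the refill below. */
	if (remaining > CACHE_MAX_SIZE)
		goto ring_dequeue;

	/* Refill the cache and serve the rest of the request from it. */
	if (backend_dequeue(cache->objs, cache->size + remaining) < 0) {
		/* Refill failed: fetch only the remainder from the backend. */
		goto ring_dequeue;
	}

	cache->len = cache->size;
	cache_objs = &cache->objs[cache->size + remaining];
	for (index = 0; index < remaining; index++)
		*obj_table++ = *--cache_objs;

	return 0;

ring_dequeue:
	/* Single, shared backend dequeue for whatever is still missing. */
	return backend_dequeue(obj_table, remaining);
}

With this structure the same label serves the no-cache case, the
oversized-request case and the failed-refill case.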