From: Andrew Rybchenko
Organization: OKTET Labs
Date: Tue, 14 Feb 2023 17:16:15 +0300
Subject: Re: [PATCH v8] mempool cache: add zero-copy get and put functions
To: Morten Brørup, Olivier Matz
Cc: Kamalakshitha Aligeri, bruce.richardson@intel.com,
 konstantin.ananyev@huawei.com, dev@dpdk.org, nd,
 david.marchand@redhat.com, Honnappa Nagarahalli
Message-ID: <6bac1a3c-e0d1-e36b-051b-0899da6fe5e2@oktetlabs.ru>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D87734@smartserver.smartshare.dk>
References: <98CBD80474FA8B44BF855DF32C47DC35D87488@smartserver.smartshare.dk>
 <20230209145833.129986-1-mb@smartsharesystems.com>
 <98CBD80474FA8B44BF855DF32C47DC35D87732@smartserver.smartshare.dk>
 <98CBD80474FA8B44BF855DF32C47DC35D87734@smartserver.smartshare.dk>

On 2/13/23 13:25, Morten Brørup wrote:
>> From: Olivier Matz [mailto:olivier.matz@6wind.com]
>> Sent: Monday, 13 February 2023 10.37
>>
>> Hello,
>>
>> Thank you for this work, and sorry for the late feedback too.
>
> Better late than never. And it's a core library, so important to get it right!
>
>>
>> On Mon, Feb 13, 2023 at 04:29:51AM +0000, Honnappa Nagarahalli wrote:
>>>
>>>
>>>>>> +/**
>>>>>> + * @internal used by rte_mempool_cache_zc_put_bulk() and
>>>>>> + * rte_mempool_do_generic_put().
>>>>>> + *
>>>>>> + * Zero-copy put objects in a mempool cache backed by the specified
>>>>>> + * mempool.
>>>>>> + *
>>>>>> + * @param cache
>>>>>> + *   A pointer to the mempool cache.
>>>>>> + * @param mp
>>>>>> + *   A pointer to the mempool.
>>>>>> + * @param n
>>>>>> + *   The number of objects to be put in the mempool cache.
>>>>>> + * @return
>>>>>> + *   The pointer to where to put the objects in the mempool cache.
>>>>>> + *   NULL if the request itself is too big for the cache, i.e.
>>>>>> + *   exceeds the cache flush threshold.
>>>>>> + */
>>>>>> +static __rte_always_inline void **
>>>>>> +__rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
>>>>>> +		struct rte_mempool *mp,
>>>>>> +		unsigned int n)
>>>>>> +{
>>>>>> +	void **cache_objs;
>>>>>> +
>>>>>> +	RTE_ASSERT(cache != NULL);
>>>>>> +	RTE_ASSERT(mp != NULL);
>>>>>> +
>>>>>> +	if (n <= cache->flushthresh - cache->len) {
>>
>> The previous code was doing this test instead:
>>
>>	if (cache->len + n <= cache->flushthresh)
>>
>> I know there is an invariant asserting that cache->len <= cache->flushthresh,
>> so there is no real issue, but I'll tend to say that it is a good practice
>> to avoid subtractions on unsigned values to avoid the risk of wrapping.
>>
>> I also think the previous test was a bit more readable.
>
> I agree with you, but I didn't object to Andrew's recommendation of changing it to this, so I did.
>
> I will change it back. Konstantin, I hope you don't mind. :-)

I suggested using the subtraction here to ensure that we handle an extremely
big 'n' value correctly (with the addition form, such an 'n' would overflow).
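For illustration, a minimal standalone sketch (plain C with made-up values,
not DPDK code) of how the two forms behave with a bogus, extremely large 'n';
'len' and 'flushthresh' stand in for the unsigned fields of
struct rte_mempool_cache:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned int len = 100;          /* current cache length */
	unsigned int flushthresh = 512;  /* cache flush threshold */
	unsigned int n = UINT_MAX - 50;  /* bogus, extremely large request */

	/* Addition form: len + n wraps around, so the check wrongly passes. */
	if (len + n <= flushthresh)
		printf("addition form accepts n=%u (wrap-around)\n", n);

	/* Subtraction form: safe, given the invariant len <= flushthresh. */
	if (n <= flushthresh - len)
		printf("subtraction form accepts n=%u\n", n);
	else
		printf("subtraction form rejects n=%u\n", n);

	return 0;
}

With the addition form a bogus 'n' close to UINT_MAX can slip past the
threshold check; that is exactly the case the subtraction form guards against.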
>
> [...]
>
>>>>>> +/**
>>>>>> + * @warning
>>>>>> + * @b EXPERIMENTAL: This API may change, or be removed, without prior
>>>>>> + * notice.
>>>>>> + *
>>>>>> + * Zero-copy put objects in a mempool cache backed by the specified
>>>>>> + * mempool.
>>
>> I think we should document the differences and advantage of using this
>> function over the standard version, explaining which copy is avoided,
>> why it is faster, ...
>>
>> Also, we should say that once this function is called, the user has
>> to copy the objects to the cache.
>>
>
> I agree, the function descriptions could be more verbose.
>
> If we want to get this feature into DPDK now, we can postpone the description improvements to a later patch.

No strong opinion, but I'd wait for the description improvements. It is very
important to have a good description from the very beginning. I'll try to find
time this week to help, but I can't promise. Maybe it is already too late...

>
> [...]
>
>>> Earlier there was a discussion on the API name.
>>> IMO, we should keep the API names similar to those in the ring library.
>>> This would provide consistency across the libraries.
>>> There were some concerns expressed about a PMD having to call 2 APIs.
>>> I do not think changing to 2 APIs will have any perf impact.
>>
>> I'm not really convinced by the API names too. Again, sorry, I know this
>> comment arrives after the battle.
>>
>> Your proposal is:
>>
>>   /* Zero-copy put objects in a mempool cache backed by the specified mempool. */
>>   rte_mempool_cache_zc_put_bulk(cache, mp, n)
>>
>>   /* Zero-copy get objects from a mempool cache backed by the specified mempool. */
>>   rte_mempool_cache_zc_get_bulk(cache, mp, n)
>>
>> Here are some observations:
>>
>> - This was said in the discussion previously, but the functions do not
>>   really get or put objects in the cache. Instead, they prepare the
>>   cache (filling it or flushing it if needed) and update its length so
>>   that the user can do the effective copy.
>
> Can be fixed by improving the function descriptions.
>
>>
>> - The "_cache" is superfluous for me: these functions do not deal more
>>   with the cache than the non zero-copy version.
>
> I have been thinking of these as "mempool cache" APIs.
>
> I don't mind getting rid of "_cache" in their names, if we agree that they are "mempool" functions, instead of "mempool cache" functions.
>
>>
>> - The order of the parameters is (cache, mp, n) while the other functions
>>   that take a mempool and a cache as parameters have the mp first (see
>>   _generic versions).
>
> The order of the parameters was due to considering these as "mempool cache" functions, so I followed the convention for an existing "mempool cache" function:
>
>   rte_mempool_cache_flush(struct rte_mempool_cache *cache,
>   		struct rte_mempool *mp);
>
> If we instead consider them as simple "mempool" functions, I agree with you about the parameter ordering.
>
> So, what does the community think... Are these "mempool cache" functions, or just "mempool" functions?

Since 'cache' is mandatory here (it cannot be NULL), I agree that it is a
"mempool cache" API, not a "mempool" API.

>
>>
>> - The "_bulk" is indeed present on other functions, but not all (the generic
>>   version does not have it), I'm not sure it is absolutely required.
>
> The mempool library offers both single-object and bulk functions, so the function names must include "_bulk".

I have no strong opinion here. Yes, "bulk" is nice for consistency, but IMHO
it is not strictly required: it makes the function names longer, and there is
no value in a single-object version of these functions.

>
>>
>> What do you think about these API below?
>>
>>   rte_mempool_prepare_zc_put(mp, n, cache)
>>   rte_mempool_prepare_zc_get(mp, n, cache)
>
> I initially used "prepare" in the names, but since we don't have accompanying "commit" functions, I decided against "prepare" to avoid confusion. (Any SQL developer will probably agree with me on this.)

prepare -> reserve?

However, in the case of get we really do get: when the function returns, the
corresponding objects have been fully taken from the mempool. Yes, in the case
of put we still need to copy the object pointers into the provided space, but
we don't need to call any mempool (cache) API to commit it. Plus, there is the
API naming symmetry. So, I'd *not* add prepare/reserve to the function names.

>
>>
>>> Also, what is the use case for the 'rewind' API?
>>
>> +1
>>
>> I have the same feeling that rewind() is not required now. It can be
>> added later if we find a use-case.
>>
>> In case we want to keep it, I think we need to better specify in the API
>> comments in which unique conditions the function can be called
>> (i.e. after a call to rte_mempool_prepare_zc_put() with the same number
>> of objects, given no other operations were done on the mempool in
>> between). A call outside of these conditions has an undefined behavior.
>
> Please refer to my answer to Honnappa on this topic.
>
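To make the "caller does the copy" point above concrete, here is a rough
put-side usage sketch. It assumes the v8 API as proposed in this patch (name
and argument order still under discussion); the fallback to
rte_mempool_generic_put() when the request exceeds the flush threshold is my
assumption, not something the patch mandates:

#include <rte_mempool.h>
#include <rte_memcpy.h>

/* Hypothetical helper: zero-copy put of 'n' object pointers from 'objs'. */
static inline void
example_put_bulk_zc(struct rte_mempool *mp, struct rte_mempool_cache *cache,
		void * const *objs, unsigned int n)
{
	void **cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);

	if (cache_objs != NULL) {
		/* The zc call only reserved room; the caller copies the pointers. */
		rte_memcpy(cache_objs, objs, n * sizeof(objs[0]));
	} else {
		/* Too big for the cache: fall back to the regular (copying) put. */
		rte_mempool_generic_put(mp, objs, n, cache);
	}
}

A get-side sketch would be symmetric, except that, as noted above, once
rte_mempool_cache_zc_get_bulk() returns the objects are already taken from
the mempool.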