Date: Fri, 24 Nov 2017 13:19:25 +0000
From: Bruce Richardson
To: Roy Shterman
Cc: dev@dpdk.org
Message-ID: <20171124131924.GA8116@bricha3-MOBL3.ger.corp.intel.com>
References: <20171124093029.GB11040@bricha3-MOBL3.ger.corp.intel.com> <046269c5-40c1-53c6-d58f-61ec5401ceb7@gmail.com> <20171124100333.GA6900@bricha3-MOBL3.ger.corp.intel.com> <612E32E6-02A0-4903-B3F0-1DD18D8430E5@gmail.com>
In-Reply-To: <612E32E6-02A0-4903-B3F0-1DD18D8430E5@gmail.com>
Organization: Intel Research and Development Ireland Ltd.
User-Agent: Mutt/1.9.1 (2017-09-22)
Subject: Re: [dpdk-dev] Question about cache_size in rte_mempool_create
List-Id: DPDK patches and discussions

On Fri, Nov 24, 2017 at 01:01:08PM +0200, Roy Shterman wrote:
> 
> Sent from my iPhone
> 
> On 24 Nov 2017, at 12:03, Bruce Richardson wrote:
> 
> >> On Fri, Nov 24, 2017 at 11:39:54AM +0200, roy wrote:
> >> Thanks for your answer, but I still cannot understand the dimension of
> >> the ring and how it is affected by the cache size.
> >>
> >>> On 24/11/17 11:30, Bruce Richardson wrote:
> >>>> On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
> >>>> Hi,
> >>>>
> >>>> In the documentation it says that:
> >>>>
> >>>>  * @param cache_size
> >>>>  *   If cache_size is non-zero, the rte_mempool library will try to
> >>>>  *   limit the accesses to the common lockless pool, by maintaining a
> >>>>  *   per-lcore object cache. This argument must be lower or equal to
> >>>>  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to
> >>>>  *   choose cache_size to have "n modulo cache_size == 0": if this is
> >>>>  *   not the case, some elements will always stay in the pool and will
> >>>>  *   never be used. The access to the per-lcore table is of course
> >>>>  *   faster than the multi-producer/consumer pool. The cache can be
> >>>>  *   disabled if the cache_size argument is set to 0; it can be useful to
> >>>>  *   avoid losing objects in cache.
> >>>>
> >>>> I wonder if someone can please explain the highlighted sentence: how does
> >>>> the cache size affect the objects inside the ring?
> >>> It has no effect upon the objects themselves. Having a cache is
> >>> strongly recommended for performance reasons. Accessing a shared ring
> >>> for a mempool is very slow compared to pulling packets from a per-core
> >>> cache. To test this you can run testpmd using different --mbcache
> >>> parameters.
> >> Still, I didn't understand the sentence from above:
> >>
> >> "It is advised to choose cache_size to have "n modulo cache_size == 0": if
> >> this is not the case, some elements will always stay in the pool and will
> >> never be used."
> >>
> >
> > This would be an artifact of the way in which the elements are taken
> > from the pool ring. If a full cache-size burst of elements is not
> > available in the ring, no elements from the ring are put in the cache.
> > It just means that the pool can never be fully emptied.
> > However, in most
> > cases, even having the pool nearly empty indicates a problem, so
> > practically I wouldn't be worried about this.
> >
> 
> But in case we tried to get a full cache-size burst from the pool and
> failed, we fall back to getting the number of objects requested in
> rte_mempool_get_bulk(), so with rte_mempool_get() we try to get one object
> out of the pool (ring). So even if there isn't a cache size's worth of
> objects left in the ring itself, each core can still get one object per
> rte_mempool_get() until the pool is empty, am I wrong?
> 
If there is no cache, you can always get all elements out of the ring by
getting them one at a time, yes.