Date: Fri, 24 Nov 2017 09:30:29 +0000
From: Bruce Richardson
To: Roy Shterman
Cc: dev@dpdk.org
Message-ID: <20171124093029.GB11040@bricha3-MOBL3.ger.corp.intel.com>
Subject: Re: [dpdk-dev] Question about cache_size in rte_mempool_create

On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
> Hi,
>
> In the documentation it says that:
>
>  * @param cache_size
>  *   If cache_size is non-zero, the rte_mempool library will try to
>  *   limit the accesses to the common lockless pool, by maintaining a
>  *   per-lcore object cache. This argument must be lower or equal to
>  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
>  *   cache_size to have "n modulo cache_size == 0": if this is
>  *   not the case, some elements will always stay in the pool and will
>  *   never be used. The access to the per-lcore table is of course
>  *   faster than the multi-producer/consumer pool. The cache can be
>  *   disabled if the cache_size argument is set to 0; it can be useful to
>  *   avoid losing objects in cache.
>
> I wonder if someone can please explain the highlighted sentence: how does
> the cache size affect the objects inside the ring?

It has no effect on the objects themselves. Having a cache is strongly
recommended for performance reasons. Accessing the shared ring of a
mempool is very slow compared to pulling packets from a per-core cache.
To test this, you can run testpmd with different --mbcache parameters.

> And also, does it mean that if I'm sharing a pool between different
> cores, can it be that a core sees the pool as empty although it has
> objects in it?
>
Yes, that can occur. You need to dimension the pool to take account of
your cache usage.

/Bruce
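
For illustration of the dimensioning point, a minimal sketch of creating a
pool sized to allow for objects parked in per-lcore caches; the pool name,
object counts and element size below are arbitrary example values, not
anything taken from the thread above:

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    #define NEEDED_OBJS 8192  /* objects the application itself needs */
    #define CACHE_SIZE   256  /* per-lcore cache, <= RTE_MEMPOOL_CACHE_MAX_SIZE */

    static struct rte_mempool *
    create_example_pool(void)
    {
            /* Objects sitting in another lcore's cache are invisible to the
             * rest of the system, so allow one full cache per lcore on top
             * of what the application needs. Keeping n a multiple of
             * CACHE_SIZE follows the "n modulo cache_size == 0" advice. */
            unsigned int n = NEEDED_OBJS + rte_lcore_count() * CACHE_SIZE;

            return rte_mempool_create("example_pool", n,
                            2048,            /* element size in bytes */
                            CACHE_SIZE,      /* per-lcore cache size */
                            0,               /* private data size */
                            NULL, NULL,      /* pool constructor + arg */
                            NULL, NULL,      /* object initializer + arg */
                            rte_socket_id(), /* NUMA socket */
                            0);              /* flags */
    }

To see the effect Bruce mentions, compare testpmd runs with --mbcache=0
against a non-zero value such as --mbcache=256.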