From mboxrd@z Thu Jan 1 00:00:00 1970
From: roy
To: Bruce Richardson
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Question about cache_size in rte_mempool_create
Date: Fri, 24 Nov 2017 11:39:54 +0200
Message-ID: <046269c5-40c1-53c6-d58f-61ec5401ceb7@gmail.com>
In-Reply-To: <20171124093029.GB11040@bricha3-MOBL3.ger.corp.intel.com>
References: <20171124093029.GB11040@bricha3-MOBL3.ger.corp.intel.com>
Content-Type: text/plain; charset=utf-8; format=flowed

Thanks for your answer, but I still can't see how the ring is dimensioned
and how it is affected by the cache size.

On 24/11/17 11:30, Bruce Richardson wrote:
> On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
>> Hi,
>>
>> In the documentation it says that:
>>
>>  * @param cache_size
>>  *   If cache_size is non-zero, the rte_mempool library will try to
>>  *   limit the accesses to the common lockless pool, by maintaining a
>>  *   per-lcore object cache.
>>  *   This argument must be lower or equal to
>>  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to
>>  *   choose cache_size to have "n modulo cache_size == 0": if this is
>>  *   not the case, some elements will always stay in the pool and will
>>  *   never be used. The access to the per-lcore table is of course
>>  *   faster than the multi-producer/consumer pool. The cache can be
>>  *   disabled if the cache_size argument is set to 0; it can be useful to
>>  *   avoid losing objects in cache.
>>
>> I wonder if someone can please explain the highlighted sentence: how does
>> the cache size affect the objects inside the ring?
>>
> It has no effect upon the objects themselves. Having a cache is
> strongly recommended for performance reasons. Accessing a shared ring
> for a mempool is very slow compared to pulling packets from a per-core
> cache. To test this you can run testpmd using different --mbcache
> parameters.

Still, I don't understand the sentence quoted above: "It is advised to
choose cache_size to have 'n modulo cache_size == 0': if this is not the
case, some elements will always stay in the pool and will never be used."

>
>> And also, does it mean that if I'm sharing a pool between different
>> cores, can it be that a core sees the pool as empty although it has
>> objects in it?
>>
> Yes, that can occur. You need to dimension the pool to take account of
> your cache usage.

Can you please elaborate on this? I'm working with multi-consumer,
multi-producer pools. In my understanding, an object is either in some
lcore's cache or in the ring. When a core looks for objects in the pool
(ring) it looks at the prod/cons head/tail, so how can the caches of
other cores affect this?

>
> /Bruce
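
P.S. To make the question concrete, here is a minimal sketch of the kind
of pool I have in mind (the pool name, element size, element count and
cache size are made up for illustration, not taken from my real setup):

#include <stdio.h>
#include <rte_mempool.h>
#include <rte_lcore.h>
#include <rte_errno.h>

#define ELT_SIZE   2048        /* made-up element size */
#define CACHE_SIZE 256         /* <= RTE_MEMPOOL_CACHE_MAX_SIZE and <= n / 1.5 */
#define NB_ELTS    (8 * 1024)  /* n chosen so that n % CACHE_SIZE == 0 */

static struct rte_mempool *
create_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create("example_pool", NB_ELTS, ELT_SIZE,
				CACHE_SIZE, 0,
				NULL, NULL,	/* no pool constructor */
				NULL, NULL,	/* no per-object init */
				rte_socket_id(),
				0);		/* default multi-producer/multi-consumer */
	if (mp == NULL)
		printf("mempool create failed: %s\n",
		       rte_strerror(rte_errno));
	return mp;
}

If I understand your answer correctly, each active lcore can keep some
objects in its local cache, so to make sure no core ever sees an empty
ring I would have to over-provision NB_ELTS by roughly
CACHE_SIZE * rte_lcore_count() on top of the objects actually in flight.
Is that the right way to dimension it?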