From: Roy Shterman
Date: Fri, 24 Nov 2017 13:01:08 +0200
To: Bruce Richardson
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Question about cache_size in rte_mempool_create

Sent from my iPhone

On 24 Nov 2017, at 12:03, Bruce Richardson wrote:

>> On Fri, Nov 24, 2017 at 11:39:54AM +0200, roy wrote:
>> Thanks for your answer, but I still can't understand how the ring is
>> dimensioned and how it is affected by the cache size.
>>
>>> On 24/11/17 11:30, Bruce Richardson wrote:
>>>> On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
>>>> Hi,
>>>>
>>>> In the documentation it says that:
>>>>
>>>>  * @param cache_size
>>>>  *   If cache_size is non-zero, the rte_mempool library will try to
>>>>  *   limit the accesses to the common lockless pool, by maintaining a
>>>>  *   per-lcore object cache. This argument must be lower or equal to
>>>>  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to
>>>>  *   choose cache_size to have "n modulo cache_size == 0": if this is
>>>>  *   not the case, some elements will always stay in the pool and will
>>>>  *   never be used. The access to the per-lcore table is of course
>>>>  *   faster than the multi-producer/consumer pool. The cache can be
>>>>  *   disabled if the cache_size argument is set to 0; it can be useful
>>>>  *   to avoid losing objects in cache.
>>>>
>>>> I wonder if someone can please explain the highlighted sentence: how
>>>> does the cache size affect the objects inside the ring?
>>> It has no effect upon the objects themselves. Having a cache is
>>> strongly recommended for performance reasons. Accessing a shared ring
>>> for a mempool is very slow compared to pulling packets from a per-core
>>> cache. To test this you can run testpmd using different --mbcache
>>> parameters.
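(For concreteness, Bruce's suggestion could look something like the two
runs below; the core and memory-channel counts are illustrative, not from
this thread:

    # mbuf cache disabled: every mbuf alloc/free hits the shared ring
    ./testpmd -l 0-2 -n 4 -- --mbcache=0

    # per-lcore mbuf cache of 250 objects
    ./testpmd -l 0-2 -n 4 -- --mbcache=250

Comparing the forwarding rates of the two runs shows what the shared-ring
accesses cost.)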
>> Still, I didn't understand the sentence from above:
>>
>>     It is advised to choose cache_size to have "n modulo cache_size == 0":
>>     if this is not the case, some elements will always stay in the pool
>>     and will never be used.
>>
>
> This would be an artifact of the way in which the elements are taken
> from the pool ring. If a full cache-size burst of elements is not
> available in the ring, no elements from the ring are put in the cache.
> It just means that the pool can never fully be emptied. However, in most
> cases, even having the pool nearly empty indicates a problem, so
> practically I wouldn't be worried about this.
>

But if we try to get a cache-size burst from the pool and fail, we fall
back to getting just the number of objects requested in
rte_mempool_get_bulk(). So with rte_mempool_get() we try to get a single
object out of the pool (ring), which means that even when a full
cache-size burst isn't available in the ring, each core can still take
one object per rte_mempool_get() call until the pool really is empty. Am
I wrong?
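To make my reading concrete, this is roughly the flow I have in mind. It
is a simplified sketch of the get path, not the actual DPDK source, and
mempool_get_sketch() is my own name for it:

    #include <string.h>
    #include <rte_mempool.h>

    /* Sketch: how a request for n objects interacts with the per-lcore
     * cache and the shared ring. */
    static int
    mempool_get_sketch(struct rte_mempool *mp, void **objs, unsigned n,
                       struct rte_mempool_cache *cache)
    {
        if (cache != NULL && n <= cache->size) {
            if (cache->len < n) {
                /* Try to refill the cache with a full burst from the ring. */
                unsigned req = n + (cache->size - cache->len);
                if (rte_mempool_ops_dequeue_bulk(mp,
                        &cache->objs[cache->len], req) < 0)
                    /* Burst unavailable: fall back to taking exactly
                     * n objects straight from the ring. */
                    return rte_mempool_ops_dequeue_bulk(mp, objs, n);
                cache->len += req;
            }
            /* Serve the request from the (now refilled) cache. */
            cache->len -= n;
            memcpy(objs, &cache->objs[cache->len], n * sizeof(void *));
            return 0;
        }
        /* No cache, or request larger than the cache: go to the ring. */
        return rte_mempool_ops_dequeue_bulk(mp, objs, n);
    }

With n == 1 (rte_mempool_get()), that fallback path is what lets a core
keep draining single objects even after cache refills start failing.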
>>>
>>>> And also does it mean that
>>>> if I'm sharing a pool between different cores, can it be that a core
>>>> sees the pool as empty although it has objects in it?
>>>>
>>> Yes, that can occur. You need to dimension the pool to take account of
>>> your cache usage.
>>
>> Can you please elaborate more on this issue? I'm working with
>> multi-consumer multi-producer pools, and in my understanding an object
>> can be either in an lcore's cache or in the ring.
>> Each core looking for objects in the pool (ring) looks at the prod/cons
>> head/tail, so how can the caches of different cores affect this?
>>
>
> Mempool elements in the caches are free elements that are available for
> use. However, they are inaccessible to any core except the core which
> freed them to that cache. For a slightly simplified example, consider a
> pool with 256 elements, and a cache size of 64. If 4 cores all request 1
> buffer each, those four cores will each fill their caches and then take
> 1 element from those caches. This means that the ring will be empty even
> though only 4 buffers are actually in use - the other 63*4 buffers are
> in per-core caches. A 5th core which comes along and requests a buffer
> will be told that the pool is empty, despite there being 252 free
> elements.
>
> Therefore you need to take account of the possibility of buffers being
> stored in caches when dimensioning your mempool.
>
> /Bruce
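Putting that dimensioning advice into numbers, a creation call might look
like the sketch below. The pool name and sizes are illustrative only; the
point is budgeting one full cache per lcore on top of the objects the
application genuinely has in flight, while keeping "n modulo cache_size
== 0":

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    #define OBJS_IN_FLIGHT 4096   /* max objects genuinely in use at once */
    #define CACHE_SIZE      256   /* per-lcore cache size */

    static struct rte_mempool *
    create_dimensioned_pool(void)
    {
        /* Worst case, every lcore's cache is full of free objects that
         * no other core can see, so size the pool as
         *   n = in-flight + cache_size * nb_lcores.
         * With e.g. 8 lcores: n = 4096 + 256 * 8 = 6144, and
         * 6144 % 256 == 0, matching the header's advice. */
        unsigned n = OBJS_IN_FLIGHT + CACHE_SIZE * rte_lcore_count();

        return rte_mempool_create("dim_pool", n,
                                  2048,        /* element size in bytes */
                                  CACHE_SIZE,  /* per-lcore cache */
                                  0,           /* private data size */
                                  NULL, NULL,  /* pool ctor + arg */
                                  NULL, NULL,  /* obj ctor + arg */
                                  rte_socket_id(),
                                  0);          /* flags */
    }

In Bruce's 256/64 example above, budgeting 64 cached objects per
participating core on top of the in-flight buffers would have left the
5th core able to allocate.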